989 results for correction methods


Relevance: 30.00%

Abstract:

Context. This thesis is framed in experimental software engineering. More concretely, it addresses the problems that arose when assessing process conformance in test-driven development experiments conducted by UPM's Experimental Software Engineering group. Process conformance was studied using Besouro, an Eclipse plug-in. It has been observed that Besouro does not work correctly in some circumstances, which casts doubt on the correctness of the existing experimental data and renders it unusable. Aim. The main objective of this work is the identification and correction of Besouro's faults. A secondary goal is fixing the datasets obtained in past experiments to the maximum possible extent, so that existing experimental results can be used with confidence. Method. (1) Test Besouro using different sequences of events (creation of methods, assertions, etc.) to identify the underlying faults; (2) fix the code; and (3) fix the datasets using code specially created for this purpose. Results. (1) We confirmed the existence of several faults in Besouro's code that affected Test-First and Test-Last episode identification. These faults caused the incorrect identification of 20% of episodes. (2) We were able to fix Besouro's code. (3) The correction of the existing datasets was possible, subject to some restrictions (such as the impossibility of tracing code-size increases back to programming time). Conclusion. The results of past experiments that depend on Besouro's data cannot be trusted. We suspect that more faults remain in Besouro's code, whose identification requires further analysis.
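
The faults concern episode classification. As a purely illustrative sketch (not Besouro's actual code; the event names are hypothetical), a simplistic Test-First/Test-Last rule might look like this:

```python
# Hypothetical event model: each event is a (kind, artifact) pair.
# Besouro's real classifier uses a much richer event stream; this only
# illustrates the kind of rule whose edge cases can harbor faults.
def classify_episode(events):
    """Label an episode 'test-first' if a test edit precedes the first
    production-code edit, 'test-last' otherwise."""
    for kind, _artifact in events:
        if kind == "test_edit":
            return "test-first"
        if kind == "production_edit":
            return "test-last"
    return "unclassified"

assert classify_episode([("test_edit", "FooTest"), ("production_edit", "Foo")]) == "test-first"
assert classify_episode([("production_edit", "Foo"), ("test_edit", "FooTest")]) == "test-last"
```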

Relevance: 30.00%

Abstract:

This thesis presents an in-depth analysis of how two kinds of direct methods, Lucas-Kanade and Inverse Compositional, should be applied to RGB-D images, and analyzes their capability and accuracy in a series of synthetic experiments. These simulate RGB images, depth (D) images, and RGB-D images, to check how the methods behave in each combination. Moreover, the methods are analyzed without any additional technique that modifies the original algorithm or aids it in its search for a global optimum, unlike most of the articles found in the literature. The goal is to understand when and why these methods converge or diverge, so that in the future the knowledge extracted from the results presented here can effectively help a potential implementer. After reading this thesis, the implementer should be able to decide which algorithm fits a particular situation best, and should also understand which problems these algorithms can present so that an appropriate remedy can be applied. The additional techniques that serve as remedies for these problems are outside the scope of this thesis; however, they are reviewed from the literature.
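
For readers unfamiliar with these methods, below is a minimal forward-additive Lucas-Kanade sketch for a pure 2D translation on a grayscale image (numpy/scipy assumed). The thesis's RGB-D formulations estimate richer warps; this only illustrates the Gauss-Newton iteration at their core.

```python
import numpy as np
from scipy import ndimage

def lk_translation(template, image, n_iters=50, tol=1e-4):
    """Forward-additive Lucas-Kanade for a pure translation p = (dy, dx),
    minimizing sum((image(x + p) - template(x))**2) by Gauss-Newton."""
    p = np.zeros(2)
    grad_y = ndimage.sobel(image, axis=0, mode="nearest") / 8.0  # dI/dy
    grad_x = ndimage.sobel(image, axis=1, mode="nearest") / 8.0  # dI/dx
    ys, xs = np.mgrid[0:template.shape[0], 0:template.shape[1]].astype(float)
    for _ in range(n_iters):
        coords = [ys + p[0], xs + p[1]]
        warped = ndimage.map_coordinates(image, coords, order=1)
        error = (template - warped).ravel()
        # Jacobian of the residual w.r.t. p: the image gradient at the warp.
        J = np.stack([ndimage.map_coordinates(grad_y, coords, order=1).ravel(),
                      ndimage.map_coordinates(grad_x, coords, order=1).ravel()],
                     axis=1)
        dp = np.linalg.solve(J.T @ J, J.T @ error)
        p = p + dp
        if np.linalg.norm(dp) < tol:
            break
    return p
```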

Relevance: 30.00%

Abstract:

The structural connectivity of the brain is considered to encode species-wise and subject-wise patterns that will unlock large areas of understanding of the human brain. Currently, diffusion MRI of the living brain makes it possible to map the microstructure of tissue and to track the pathways of fiber bundles connecting the cortical regions across the brain. These bundles are summarized in a network representation called the connectome, which is analyzed using graph theory. The extraction of the connectome from diffusion MRI requires a long processing flow including image enhancement, reconstruction, segmentation, registration, diffusion tracking, etc. Although a concerted effort has been devoted to the definition of standard pipelines for connectome extraction, it is still crucial to define quality assessment protocols for these workflows. The definition of quality control protocols is hindered by the complexity of the pipelines under test and the absolute lack of gold standards for diffusion MRI data. Here we characterize the impact on structural connectivity workflows of the geometrical deformation typically shown by diffusion MRI data due to the inhomogeneity of magnetic susceptibility across the imaged object. We propose an evaluation framework, including whole-brain realistic phantoms, to compare the existing methodologies for correcting these artifacts. Additionally, we design and implement an image segmentation and registration method that avoids the correction task altogether and enables processing in the native space of the diffusion data. We release PySDCev, an evaluation framework for the quality control of connectivity pipelines, specialized in the study of susceptibility-derived distortions. In this context, we propose Diffantom, a whole-brain phantom that addresses the lack of gold-standard data. The three correction methodologies under comparison performed reasonably, and it is difficult to determine which method is most advisable. We demonstrate that susceptibility-derived correction is necessary to increase the sensitivity of connectivity pipelines, at the cost of specificity. Finally, with a registration and segmentation tool called regseg, we demonstrate how the problem of susceptibility-derived distortion can be overcome, allowing data to be used in their original coordinates. This is crucial to increase the sensitivity of the whole pipeline without any loss in specificity.
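
To illustrate the kind of quality metric such an evaluation framework relies on (a sketch under our own assumptions, not code from PySDCev): with a phantom, the ground-truth connectome is known, so edge-wise sensitivity and specificity of a recovered network can be computed directly.

```python
import numpy as np

def connectome_sens_spec(true_adj, est_adj):
    """Edge-wise sensitivity and specificity of an estimated connectome
    against a ground-truth adjacency matrix (e.g., from a digital phantom)."""
    iu = np.triu_indices_from(true_adj, k=1)      # undirected graph: upper triangle
    t, e = true_adj[iu] > 0, est_adj[iu] > 0
    tp, fn = np.sum(t & e), np.sum(t & ~e)
    tn, fp = np.sum(~t & ~e), np.sum(~t & e)
    return tp / (tp + fn), tn / (tn + fp)         # (sensitivity, specificity)
```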

Relevance: 30.00%

Abstract:

The aim of this study was to obtain the exact value of the keratometric index (nkexact) and to clinically validate an adjusted keratometric index (nkadj) that minimizes the error of the keratometric approximation. Methods: The nkexact value was determined by setting the difference (ΔPc) between keratometric corneal power (Pk) and Gaussian corneal power (PGauss) equal to 0. The nkadj was defined as the value associated with an equivalent magnitude of ΔPc for the extreme values of posterior corneal radius (r2c) for each anterior corneal radius value (r1c). This nkadj was used for the calculation of the adjusted corneal power (Pkadj). Values of r1c ∈ (4.2, 8.5) mm and r2c ∈ (3.1, 8.2) mm were considered. Differences of True Net Power with PGauss, Pkadj, and Pk(1.3375) were calculated in a clinical sample of 44 eyes with keratoconus. Results: nkexact ranged from 1.3153 to 1.3396 and nkadj from 1.3190 to 1.3339, depending on the eye model analyzed. All nkadj values fitted perfectly to 8 linear equations. Differences between Pkadj and PGauss did not exceed ±0.7 D (diopters). Clinically, nk = 1.3375 was not valid in any case. Pkadj and True Net Power, as well as Pk(1.3375) and Pkadj, were statistically different (P < 0.01), whereas no differences were found between PGauss and Pkadj (P > 0.01). Conclusions: The use of a single value of nk for the calculation of total corneal power in keratoconus has been shown to be imprecise, leading to inaccuracies in the detection and classification of this corneal condition. Furthermore, our study shows the relevance of corneal thickness in corneal power calculations in keratoconus.
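
Both power definitions follow standard paraxial optics. A minimal sketch, assuming the usual Gullstrand-type indices (cornea 1.376, aqueous 1.336) and a nominal pachymetry, neither of which is taken from the paper itself:

```python
def keratometric_power(r1c_mm, nk=1.3375):
    """Single-surface keratometric approximation: Pk = (nk - 1) / r1c."""
    return (nk - 1.0) / (r1c_mm * 1e-3)

def gaussian_power(r1c_mm, r2c_mm, d_mm=0.55, n_cornea=1.376, n_aqueous=1.336):
    """Gaussian (thick-lens) corneal power from both surfaces and thickness d."""
    p1 = (n_cornea - 1.0) / (r1c_mm * 1e-3)        # anterior surface: air -> cornea
    p2 = (n_aqueous - n_cornea) / (r2c_mm * 1e-3)  # posterior surface: cornea -> aqueous
    return p1 + p2 - (d_mm * 1e-3 / n_cornea) * p1 * p2

# Example: the discrepancy (ΔPc) that the study drives to zero by choice of nk.
delta_pc = keratometric_power(7.8) - gaussian_power(7.8, 6.5)
```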

Relevance: 30.00%

Abstract:

Purpose: To compare the manifest refractive cylinder (MRC) predictability of myopic astigmatism laser in situ keratomileusis (LASIK) between eyes with low and high ocular residual astigmatism (ORA). Setting: London Vision Clinic, London, United Kingdom. Design: Retrospective case study. Methods: The ORA was considered the vector difference between the MRC and the corneal astigmatism. The index of success (IoS), difference vector ÷ MRC, was analyzed for different groups as follows: stage 1, low ORA (ORA ÷ MRC <1), high ORA (ORA ÷ MRC ≥1); stage 2, low ORA group reduced to match the high ORA group for MRC; stage 3, grouped by ORA magnitude with low ORA (<0.50 diopters [D]), mid ORA (0.50 to 1.24 D), and high ORA (≥1.25 D); stage 4, high ORA group subdivided into low (<0.75 D) and high (≥0.75 D) corneal astigmatism. Results: For stage 1, the mean preoperative MRC and mean IoS were −1.32 D ± 0.65 (SD) (range −0.55 to −3.77 D) and 0.27, respectively, for low ORA and −0.79 ± 0.20 D (range −0.56 to −2.05 D) and 0.37, respectively, for high ORA. For stage 2, the mean IoS increased to 0.32 for low ORA. For stage 3, the mean IoS was 0.28, 0.29, and 0.31 for low ORA, mid ORA, and high ORA, respectively. For stage 4, the mean IoS was 0.20 for high ORA/low corneal astigmatism and 0.35 for high ORA/high corneal astigmatism. Conclusions: The MRC predictability was slightly worse in eyes with high ORA when grouped by the ORA ÷ MRC. Matching for the MRC and grouping by ORA magnitude resulted in similar predictability; however, eyes with high ORA and high corneal astigmatism were less predictable.
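
The ORA calculation rests on treating astigmatism as a double-angle vector so that cylinders at different axes can be subtracted. A minimal sketch, assuming the refraction has already been referenced to the corneal plane:

```python
import numpy as np

def astig_vector(cyl, axis_deg):
    """Astigmatism as a double-angle Cartesian vector."""
    a = np.deg2rad(2.0 * axis_deg)
    return np.array([cyl * np.cos(a), cyl * np.sin(a)])

def ora_magnitude(refr_cyl, refr_axis, corneal_cyl, corneal_axis):
    """ORA: magnitude of the vector difference between manifest refractive
    cylinder and corneal astigmatism."""
    d = astig_vector(refr_cyl, refr_axis) - astig_vector(corneal_cyl, corneal_axis)
    return float(np.hypot(d[0], d[1]))

# Example grouping criterion from stage 1: "high ORA" when ORA / |MRC| >= 1.
```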

Relevance: 30.00%

Abstract:

A new Stata command called -mgof- is introduced. The command is used to compute distributional tests for discrete (categorical, multinomial) variables. Apart from classic large sample $\chi^2$-approximation tests based on Pearson's $X^2$, the likelihood ratio, or any other statistic from the power-divergence family (Cressie and Read 1984), large sample tests for complex survey designs and exact tests for small samples are supported. The complex survey correction is based on the approach by Rao and Scott (1981) and parallels the survey design correction used for independence tests in -svy:tabulate-. The exact tests are computed using Monte Carlo methods or exhaustive enumeration. An exact Kolmogorov-Smirnov test for discrete data is also provided.
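
For readers without Stata, the large-sample power-divergence test and a Monte Carlo exact p-value are easy to mirror in Python with scipy (this sketch covers neither -mgof-'s survey correction nor its exhaustive enumeration):

```python
import numpy as np
from scipy import stats

f_obs = np.array([38, 29, 21, 12])        # observed counts in 4 categories
p_null = np.array([0.4, 0.3, 0.2, 0.1])   # hypothesized proportions
f_exp = p_null * f_obs.sum()

# lambda_=1 gives Pearson's X^2; "cressie-read" (lambda=2/3) is another
# member of the power-divergence family.
stat, p_asymptotic = stats.power_divergence(f_obs, f_exp, lambda_="cressie-read")

# Monte Carlo exact test: resample under H0 and compare statistics.
rng = np.random.default_rng(0)
sims = rng.multinomial(f_obs.sum(), p_null, size=10_000)
sim_stats = np.array([stats.power_divergence(s, f_exp, lambda_="cressie-read")[0]
                      for s in sims])
p_exact = (np.sum(sim_stats >= stat) + 1) / (len(sim_stats) + 1)
```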

Relevance: 30.00%

Abstract:

BACKGROUND AND PURPOSE In clinical diagnosis, medical image segmentation plays a key role in the analysis of pathological regions. Despite advances in automatic and semi-automatic segmentation techniques, time-effective correction tools are commonly needed to improve segmentation results. These tools must therefore provide faster corrections with fewer interactions, and a user-independent solution, to reduce the time frame between image acquisition and diagnosis. METHODS We present a new interactive method for correcting image segmentations. Our method provides 3D shape corrections through 2D interactions, enabling intuitive and natural correction of 3D segmentation results. The method has been implemented in a software tool and evaluated for the tasks of lumbar muscle and knee joint segmentation from MR images. RESULTS Experimental results show that full segmentation corrections could be performed within an average correction time of 5.5 ± 3.3 minutes and an average of 56.5 ± 33.1 user interactions, while maintaining the quality of the final segmentation result at an average Dice coefficient of 0.92 ± 0.02 for both anatomies. In addition, across users with different levels of expertise, our method reduced the correction time from 38 ± 19.2 minutes to 6.4 ± 4.3 minutes and the number of interactions from 339 ± 157.1 to 67.7 ± 39.6.
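
The Dice coefficient used as the quality measure is simple to compute; a minimal sketch for binary masks:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```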

Relevance: 30.00%

Abstract:

A phantom that can be used for mapping geometric distortion in magnetic resonance imaging (MRI) is described. This phantom provides an array of densely distributed control points in three-dimensional (3D) space. These points form the basis of a comprehensive measurement method to correct for geometric distortion in MR images arising principally from gradient field non-linearity and magnetic field inhomogeneity. The phantom was designed on the concept that a point in space can be defined by three orthogonal planes. This novel design approach allows for as many control points as desired. Employing this design, a highly accurate method has been developed that enables the positions of the control points to be measured to sub-voxel accuracy. The phantom described in this paper was constructed to fit into the body coil of an MRI scanner (external dimensions: 310 mm x 310 mm x 310 mm) and contained 10,830 control points. With this phantom, the mean errors in the measured coordinates of the control points were on the order of 0.1 mm or less, less than one tenth of the voxel dimensions of the phantom image. The calculated three-dimensional distortion map, i.e., the differences between the image positions and true positions of the control points, can then be used to compensate for geometric distortion for a full image restoration. It is anticipated that this novel method will have an impact on the applicability of MRI in both clinical and research settings, especially in areas where high geometric accuracy is required, such as MR neuro-imaging. (C) 2004 Elsevier Inc. All rights reserved.

Relevance: 30.00%

Abstract:

In this paper, we present the correction of the geometric distortion measured in the clinical magnetic resonance imaging (MRI) systems reported in the preceding paper (Part 1), using a 3D method based on the phantom-mapped geometric distortion data. This method allows the correction to be made on phantom images acquired with or without the vendor's correction applied. With the vendor's 2D correction applied, the method corrects both the residual geometric distortion still present in the plane in which the vendor's correction was applied (the axial plane) and the uncorrected geometric distortion along the axis normal to that plane. The effectiveness of the correction using this new method was evaluated by analyzing the residual geometric distortion in the corrected phantom images. The results show that the new method can restore the distorted images in 3D nearly to perfection. For all the MRI systems investigated, the mean absolute deviations in the positions of the control points (along the x-, y- and z-axes) measured on the corrected phantom images were all less than 0.2 mm. The maximum absolute deviations were all below ~0.8 mm. As expected, the correction of the phantom images acquired with the vendor's correction applied in the axial plane performed equally well: both the geometric distortion still present in the axial plane after applying the vendor's correction and the uncorrected distortion along the z-axis were restored. (C) 2004 Elsevier Inc. All rights reserved.
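
A hedged sketch of the general idea (our own construction, not the authors' implementation): fit a smooth 3D displacement field to the control-point deviations, then resample the image onto the undistorted grid.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def correct_distortion(image, true_pts, measured_pts):
    """Resample a distorted 3D image onto the true coordinate grid, given
    the true and measured voxel positions of phantom control points."""
    # Smooth displacement field: where a true position lands in the raw image.
    field = RBFInterpolator(true_pts, measured_pts - true_pts, smoothing=1.0)
    grid = np.stack(np.meshgrid(*(np.arange(s) for s in image.shape),
                                indexing="ij"), axis=-1).reshape(-1, 3)
    src = grid + field(grid)               # sampling locations in the raw image
    return map_coordinates(image, src.T, order=1).reshape(image.shape)
```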

Relevance: 30.00%

Abstract:

The correction of presbyopia and restoration of true accommodative function to the ageing eye is the focus of much ongoing research and clinical work. A range of accommodating intraocular lenses (AIOLs) implanted during cataract surgery has been developed; these are designed to change either their position or their shape in response to ciliary muscle contraction to generate an increase in dioptric power. Two main design concepts exist. First, axial-shift concepts rely on anterior axial movement of one or two optics to create accommodative ability. Second, curvature-change designs are intended to provide significant amplitudes of accommodation with little physical displacement. Single-optic devices have been used most widely, although the true accommodative ability provided by forward shift of the optic appears limited, and recent findings indicate that alternative factors, such as flexing of the optic altering ocular aberrations, may be responsible for the enhanced near vision reported in published studies. Techniques for analysing the performance of AIOLs have not been standardised, and clinical studies have reported findings using a wide range of both subjective and objective methods, making it difficult to gauge the success of these implants. There is a need for longitudinal studies using objective methods to assess the long-term performance of AIOLs and to determine whether true accommodation is restored by the available designs. While dual-optic and curvature-change IOLs are designed to provide greater amplitudes of accommodation than is possible with single-optic devices, several of these implants are in the early stages of development and require significant further work before human use is possible. A number of challenges remain and must be addressed before the ultimate goal of restoring youthful levels of accommodation to the presbyopic eye can be achieved.

Relevance: 30.00%

Abstract:

Purpose To develop a standardized questionnaire of near visual function and satisfaction to complement visual function evaluations of presbyopic corrections. Setting Eye Clinic, School of Life and Health Sciences, Aston University, Midland Eye Institute and Solihull Hospital, Birmingham, United Kingdom. Design Questionnaire development. Methods A preliminary 26-item questionnaire of previously used near visual function items was completed by patients with monofocal intraocular lenses (IOLs), multifocal IOLs, accommodating IOLs, multifocal contact lenses, or varifocal spectacles. Rasch analysis was used for item reduction, after which internal and test–retest reliabilities were determined. Construct validity was determined by correlating the resulting Near Activity Visual Questionnaire (NAVQ) scores with near visual acuity and critical print size (CPS), which was measured using the Minnesota Low Vision Reading Test chart. Discrimination ability was assessed through receiver-operating characteristic (ROC) curve analysis. Results One hundred fifty patients completed the questionnaire. Item reduction resulted in a 10-item NAVQ with excellent separation (2.92), internal consistency (Cronbach α = 0.95), and test–retest reliability (intraclass correlation coefficient = 0.72). Correlations of questionnaire scores with near visual acuity (r = 0.32) and CPS (r = 0.27) provided evidence of validity, and discrimination ability was excellent (area under ROC curve = 0.91). Conclusion Results show the NAVQ is a reliable, valid instrument that can be incorporated into the evaluation of presbyopic corrections.
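
Of the reliability metrics reported, Cronbach's alpha is the most straightforward to reproduce; a minimal sketch for a subjects-by-items matrix of questionnaire scores:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)
```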

Relevance: 30.00%

Abstract:

Presbyopia is an age-related eye condition in which one of the signs is a reduction in the amplitude of accommodation, resulting in the loss of the ability to change the eye's focus from far to near. It is one of the most common age-related ailments, affecting everyone from around their mid-40s. Methods for the correction of presbyopia include contact lens and spectacle options, but the surgical correction of presbyopia still remains a significant challenge for refractive surgeons. Surgical strategies for dealing with presbyopia may be extraocular (corneal or scleral) or intraocular (removal and replacement of the crystalline lens or some type of treatment of the crystalline lens itself). There are, however, a number of limitations and considerations that have prevented the widespread acceptance of surgical correction of presbyopia. Each surgical strategy presents its own unique set of advantages and disadvantages. For example, lens removal and replacement with an intraocular lens may not be preferable in a young patient with presbyopia without a refractive error. Similarly, treatment of the crystalline lens may not be a suitable choice for a patient with early signs of cataract. This article reviews the options available for the surgical correction of presbyopia, as well as those that are in development and likely to be available in the near future.

Relevance: 30.00%

Abstract:

PURPOSE: To provide a consistent standard for the evaluation of different types of presbyopic correction. SETTING: Eye Clinic, School of Life and Health Sciences, Aston University, Birmingham, United Kingdom. METHODS: The presbyopic corrections examined were accommodating intraocular lenses (IOLs), simultaneous multifocal and monovision contact lenses, and varifocal spectacles. Binocular near visual acuity measured with different optotypes (uppercase letters, lowercase letters, and words) and reading metrics assessed with the Minnesota Near Reading chart (reading acuity, critical print size [CPS], CPS reading speed) were intercorrelated (Pearson product moment correlations) and assessed for concordance (intraclass correlation coefficients [ICC]) and agreement (Bland-Altman analysis) as indications of clinical usefulness. RESULTS: Nineteen accommodating IOL cases, 40 simultaneous contact lens cases, and 38 varifocal spectacle cases were evaluated. Other than CPS reading speed, all near visual acuity and reading metrics correlated well with each other (r > 0.70, P < .001). Near visual acuity measured with uppercase letters was highly concordant (ICC, 0.78) and in close agreement with that measured with lowercase letters (±0.17 logMAR). Near word acuity agreed well with reading acuity (±0.16 logMAR), which in turn agreed well with near visual acuity measured with uppercase letters (±0.16 logMAR). Concordance (ICC, 0.18 to 0.46) and agreement (±0.24 to 0.30 logMAR) of CPS with the other near metrics were moderate. CONCLUSION: Measurement of near visual ability in presbyopia should be standardized to include assessment of near visual acuity with logMAR uppercase-letter optotypes, the smallest logMAR print size that maintains maximum reading speed (CPS), and reading speed. J Cataract Refract Surg 2009; 35:1401-1409 (C) 2009 ASCRS and ESCRS
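
The Bland-Altman agreement analysis used above reduces to the bias and 95% limits of agreement of the paired differences; a minimal sketch:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurements."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```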

Relevance: 30.00%

Abstract:

High levels of corneal astigmatism are present in a significant proportion of the population. During cataract surgery, pre-existing astigmatism can be corrected using single or paired incisions on the steep axis of the cornea, using relaxing incisions, or with a toric intraocular lens. This review provides an overview of the conventional methods of astigmatic correction during cataract surgery and, in particular, discusses the various types of toric lenses presently available and the techniques used in determining the correct axis for the placement of such lenses. Furthermore, the potential causes of rotation in toric lenses are identified, along with techniques for assessing and quantifying the amount of rotation and subsequent management options for addressing post-operative rotation.
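
The clinical importance of rotational stability follows from the standard crossed-cylinder result: a toric lens of cylinder power C rotated θ degrees off axis leaves a residual astigmatism of magnitude 2·C·sin(θ), so at 30 degrees the residual equals the full cylinder. A minimal sketch:

```python
import numpy as np

def residual_cylinder(toric_cyl, rotation_deg):
    """Residual astigmatism magnitude left by a toric IOL of cylinder power
    `toric_cyl` rotated `rotation_deg` away from its intended axis."""
    return 2.0 * toric_cyl * np.sin(np.deg2rad(rotation_deg))

# Example: a 2.00 D toric lens rotated 10 degrees leaves ~0.69 D uncorrected.
```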

Relevance: 30.00%

Abstract:

1. Fitting a linear regression to data provides much more information about the relationship between two variables than a simple correlation test, and a goodness-of-fit test of the line should always be carried out. Hence, r squared estimates the strength of the relationship between Y and X, ANOVA tests whether a statistically significant line is present, and the 't' test whether the slope of the line is significantly different from zero (see the sketch after this list).
2. Always check whether the data collected fit the assumptions of regression analysis and, if not, whether a transformation of the Y and/or X variables is necessary.
3. If the regression line is to be used for prediction, it is important to determine whether the prediction involves an individual y value or a mean. Care should be taken if predictions are made close to the extremities of the data; they are subject to considerable error if x falls beyond the range of the data. Multiple predictions require correction of the P values.
4. If several individual regression lines have been calculated from a number of similar sets of data, consider whether they should be combined to form a single regression line.
5. If the data exhibit a degree of curvature, fitting a higher-order polynomial curve may provide a better fit than a straight line. In this case, a test of whether the data depart significantly from a linear regression should be carried out.
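
A minimal Python sketch of the checks in point 1, on illustrative data (scipy's linregress reports the slope t test via its p-value; for simple regression this is equivalent to the ANOVA F test):

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8, 12.1])   # illustrative data

res = stats.linregress(x, y)
r_squared = res.rvalue ** 2                 # strength of the relationship
t_slope = res.slope / res.stderr            # t statistic for H0: slope = 0
print(f"r^2 = {r_squared:.3f}, t = {t_slope:.2f}, p = {res.pvalue:.4f}")
```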