148 results for Retinal image quality metric
Abstract:
Images obtained from high-throughput mass spectrometry (MS) contain information that remains hidden when looking at a single spectrum at a time. Image processing of liquid chromatography-MS datasets can be extremely useful for quality control, experimental monitoring and knowledge extraction. The importance of imaging in differential analysis of proteomic experiments has already been established through two-dimensional gels and can now be foreseen with MS images. We present MSight, a new software designed to construct and manipulate MS images, as well as to facilitate their analysis and comparison.
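Conceptually, the MS image described here is a two-dimensional intensity map with retention time along one axis and m/z along the other. The sketch below (not MSight's actual code; bin ranges and array shapes are illustrative assumptions) shows how a sequence of spectra could be rasterized into such an image:

    import numpy as np

    def lcms_image(scans, mz_min=400.0, mz_max=1600.0, mz_bins=1200):
        """Rasterize an LC-MS run into a 2D image: one row per scan
        (retention time), one column per m/z bin, pixel = summed intensity."""
        image = np.zeros((len(scans), mz_bins))
        for row, (mz, intensity) in enumerate(scans):
            cols = ((np.asarray(mz) - mz_min) / (mz_max - mz_min) * mz_bins).astype(int)
            ok = (cols >= 0) & (cols < mz_bins)
            np.add.at(image[row], cols[ok], np.asarray(intensity)[ok])
        return image

    # Two synthetic scans, each a (m/z values, intensities) pair.
    scans = [(np.array([500.2, 750.8]), np.array([1e4, 3e3])),
             (np.array([500.3, 1200.1]), np.array([8e3, 5e2]))]
    img = lcms_image(scans)  # ready for standard image processing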
Abstract:
This paper is a joint effort between five institutions that introduces several novel similarity measures and combines them to carry out a multimodal segmentation evaluation. The new similarity measures proposed are based on the location and the intensity values of the misclassified voxels, as well as on the connectivity and the boundaries of the segmented data. We show experimentally that the combination of these measures improves the quality of the evaluation. The study presented here has been carried out using four different segmentation methods from four different labs applied to a simulated MRI dataset of the brain. We claim that our new measures improve the robustness of the evaluation and provide a better understanding of the differences between segmentation methods.
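As a generic illustration of a location-aware measure in this family (not the paper's exact measures; the distance weighting is an assumption made for the example), one can penalize each misclassified voxel by its distance to the boundary of the reference segmentation:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def distance_weighted_error(pred, ref):
        """Average penalty over misclassified voxels, each weighted by its
        Euclidean distance to the reference boundary (binary masks)."""
        pred, ref = pred.astype(bool), ref.astype(bool)
        # Distance of every voxel to the reference boundary.
        dist = np.where(ref, distance_transform_edt(ref),
                        distance_transform_edt(~ref))
        errors = pred ^ ref
        # Errors far from the true boundary are penalized more heavily.
        return (dist * errors).sum() / max(errors.sum(), 1)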
Abstract:
The aim was to propose a strategy for finding reasonable compromises between image noise and dose as a function of patient weight. The weighted CT dose index (CTDI(w)) was measured on a multidetector-row CT unit using CTDI test objects of 16, 24 and 32 cm in diameter at 80, 100, 120 and 140 kV. These test objects were then scanned in helical mode using a wide range of tube currents and voltages, with a reconstructed slice thickness of 5 mm. For each set of acquisition parameters, image noise was measured, and the Rose model observer was used to test two strategies for proposing a reasonable compromise between dose and low-contrast detection performance: (1) the use of a unique noise level for all test object diameters, and (2) the use of a unique dose efficacy level, defined as the noise reduction per unit dose. Published data were used to define four weight classes, and an acquisition protocol was proposed for each class. The protocols have been applied in clinical routine for more than one year. CTDI(vol) values of 6.7, 9.4, 15.9 and 24.5 mGy were proposed for the weight classes 2.5-5, 5-15, 15-30 and 30-50 kg, with image noise levels in the range of 10-15 HU. The proposed method allows patient dose and image noise to be controlled in such a way that dose reduction does not impair the detection of low-contrast lesions. The proposed values correspond to high-quality images and can be reduced if only high-contrast organs are assessed.
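The Rose model observer mentioned above predicts low-contrast detectability from lesion contrast, lesion area and image noise. A minimal sketch, assuming the classic Rose criterion (SNR of roughly 5 for reliable detection) and the usual noise proportional to 1/sqrt(dose) scaling; all numeric constants are illustrative, not the paper's values:

    import numpy as np

    def rose_snr(contrast_hu, lesion_area_px, noise_hu):
        """Rose model: SNR grows with contrast and lesion size, shrinks with noise."""
        return contrast_hu * np.sqrt(lesion_area_px) / noise_hu

    def noise_at_dose(noise_ref_hu, dose_ref_mgy, dose_mgy):
        """Quantum-limited scaling: noise is inversely proportional to sqrt(dose)."""
        return noise_ref_hu * np.sqrt(dose_ref_mgy / dose_mgy)

    # Detectability of a 10 HU, 50-pixel lesion when moving from the 15.9 mGy
    # to the 9.4 mGy protocol (CTDIvol values from the abstract; the 12 HU
    # reference noise is an assumed figure).
    noise = noise_at_dose(12.0, 15.9, 9.4)
    detectable = rose_snr(10.0, 50.0, noise) >= 5.0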
Abstract:
Purpose: Many countries use the PGMI (P=perfect, G=good, M=moderate, I=inadequate) classification system for assessing the quality of mammograms. Limits inherent to the subjectivity of this classification have been shown. Prior to introducing this system in Switzerland, we wanted to better understand the origin of this subjectivity in order to minimize it. Our study aimed at identifying the main determinants of the variability of the PGMI system and the criteria that are most subject to subjectivity. Methods and Materials: A focus group composed of 2 experienced radiographers and 2 radiologists specified each PGMI criterion. Ten raters (6 radiographers and 4 radiologists) twice evaluated a panel of 40 randomly selected mammograms (20 analog and 20 digital) according to these specified PGMI criteria. The PGMI classification was assessed, and intra- and inter-rater reliability was tested for each professional group (radiographer vs radiologist), image technology (analog vs digital) and PGMI criterion. Results: Some 3,200 images were assessed. Intra-rater reliability appears to be weak, particularly with respect to inter-rater variability. Subjectivity appears to be largely independent of the professional group and image technology. The aspects of the PGMI classification criteria most subject to variability were identified. Conclusion: Post-test discussions made it possible to specify some criteria more precisely. This should reduce subjectivity when applying the PGMI classification system. A concomitant, substantial effort in training radiographers is also necessary.
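Rater agreement of this kind is commonly quantified with Cohen's kappa (the abstract does not name the statistic used, so this choice is an assumption); a minimal sketch for one pair of ratings over the P/G/M/I labels:

    from sklearn.metrics import cohen_kappa_score

    # Hypothetical PGMI ratings of the same images by two raters (or by one
    # rater in two sessions, for intra-rater reliability).
    rater_a = ["P", "G", "G", "M", "I", "G", "P", "M"]
    rater_b = ["P", "G", "M", "M", "I", "P", "P", "G"]

    # kappa = (observed agreement - chance agreement) / (1 - chance agreement)
    kappa = cohen_kappa_score(rater_a, rater_b)
    print(f"kappa = {kappa:.2f}")  # 1.0 = perfect agreement, <= 0 = chance level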
Abstract:
In this paper, we propose two active learning algorithms for semiautomatic definition of training samples in remote sensing image classification. Based on predefined heuristics, the classifier ranks the unlabeled pixels and automatically chooses those that are considered the most valuable for its improvement. Once the pixels have been selected, the analyst labels them manually and the process is iterated. Starting with a small and nonoptimal training set, the model itself builds the optimal set of samples which minimizes the classification error. We have applied the proposed algorithms to a variety of remote sensing data, including very high resolution and hyperspectral images, using support vector machines. Experimental results confirm the consistency of the methods. The required number of training samples can be reduced to 10% using the methods proposed, reaching the same level of accuracy as larger data sets. A comparison with a state-of-the-art active learning method, margin sampling, is provided, highlighting advantages of the methods proposed. The effect of spatial resolution and separability of the classes on the quality of the selection of pixels is also discussed.
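Margin sampling, the baseline compared against above, selects the unlabeled pixels closest to the current decision boundary. A minimal binary sketch with a support vector machine (the synthetic data and batch size are illustrative):

    import numpy as np
    from sklearn.svm import SVC

    def margin_sampling_step(clf, X_unlabeled, batch_size=10):
        """Indices of unlabeled samples closest to the SVM decision boundary,
        i.e. the candidates most valuable to label next."""
        margins = np.abs(clf.decision_function(X_unlabeled))
        return np.argsort(margins)[:batch_size]

    # One active-learning iteration on synthetic data.
    rng = np.random.default_rng(0)
    X_train, y_train = rng.normal(size=(20, 5)), rng.integers(0, 2, 20)
    X_pool = rng.normal(size=(1000, 5))

    clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
    to_label = margin_sampling_step(clf, X_pool)
    # The analyst labels X_pool[to_label], the samples join the training
    # set, the SVM is retrained, and the loop repeats.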
Abstract:
Cardiac magnetic resonance imaging (CMR) is a technology that has been used in cardiology since the 1980s. This non-invasive imaging technique acquires images of the heart in three dimensions, in any plane, without radiation, and at high resolution. It has become a reference technique for the evaluation and investigation of various cardiac pathologies. Cardiac morphology, ventricular function and contraction, tissue perfusion and tissue viability can all be characterized using different imaging sequences. However, the technology relies on complex physical principles, and its practical application faces the difficulty of assessing an organ in constant motion. Cardiac MRI is therefore subject to various artifacts that disturb the interpretation of examinations and can reduce the diagnostic accuracy of the technique. To our knowledge, most CMR images are analyzed and interpreted without a rigorous evaluation of the intrinsic quality of the examination, and no criteria for evaluating the quality of CMR examinations have so far been clearly established. The CMR team at the CHUV, led by Prof. J. Schwitter, compiled a list of 35 qualitative and 12 quantitative criteria assessing the quality of a CMR examination and introduced them into an evaluation grid. The purpose of this study is to describe these criteria and validate their reproducibility through the simultaneous interpretation of CMR examinations by different observers (cardiologists specialized in MRI, a medical student, a specialized nurse). Our study demonstrated that the criteria defined for the evaluation of CMR examinations are robust and yield good intra- and inter-observer reproducibility, thereby validating the use of these quality criteria in cardiac magnetic resonance imaging. Further studies are still needed to determine the impact of image quality on the diagnostic accuracy of the technique. The standardized criteria we validated will be used to assess image quality in a European-scale CMR study, the EuroCMR registry. Other intended uses of these quality criteria include providing a reference for assessing examination quality in all future clinical studies using CMR, allowing CMR centers to quantify their level of quality or even creating a quality-standard certificate for such centers, evaluating the reproducibility of image evaluation by different observers within the same center, and precisely evaluating the quality of sequences developed in the future in the CMR field.
Abstract:
Advanced kernel methods for remote sensing image classification (Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009). The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities in the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e., the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. The accent is put on algorithmic efficiency and on the simplicity of the approaches proposed, to avoid overly complex models that would not be adopted by users. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed. First, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features; this model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information are the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
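For the semi-supervised model family sketched above, one standard way to let unlabeled pixels improve the data description is graph-based label propagation (an illustration with scikit-learn's LabelPropagation, not the thesis's specific algorithm; the data are synthetic):

    import numpy as np
    from sklearn.semi_supervised import LabelPropagation

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))          # spectral feature vectors
    y = rng.integers(0, 3, 500)            # true classes (3-class problem)

    y_train = y.copy()
    y_train[50:] = -1                      # only 50 labeled pixels; -1 = unlabeled

    model = LabelPropagation(kernel="rbf", gamma=0.5).fit(X, y_train)
    predicted = model.transduction_        # labels inferred for all pixels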
Abstract:
PURPOSE: To study the combination of intravitreous injection of oligodeoxynucleotides (ODNs) and saline transpalpebral iontophoresis for the delivery of ODNs to photoreceptors in newborn rd1/rd1 mice. METHODS: Cathodal or anodal transpalpebral iontophoresis (1.43 mA/cm(2) for 5 min) was applied to the eyes of postnatal day 7 (PN7) rd1/rd1 mice immediately before the intravitreous injection of ODNs. The effect of cathodal iontophoresis after ODN injection was also evaluated. The influence of current intensity (0.5, 1.5, and 2.5 mA) was assayed with cathodal iontophoresis performed prior to ODN injection. The duration of current-induced facilitation of ODN delivery to photoreceptors was evaluated for 6 h following iontophoresis. One group of control eyes received cathodal iontophoresis prior to the intravitreous injection of phosphate-buffered saline (PBS) or hexachlorofluorescein (Hex). A second control group received ODN or Hex intravitreous injection without iontophoresis. The penetration of fluorescent ODNs into the outer nuclear layer (ONL) was quantified by image analysis of the ONL fluorescence intensity on cryosection microphotographs. The integrity of the ODN was assessed using acrylamide gel migration after its extraction from the retina of treated mice. The integrity of the retinal structure, 1 and 24 h after iontophoresis, was analyzed using light and electron microscopy. RESULTS: Transpalpebral anodal or cathodal saline iontophoresis enhanced the penetration of ODNs into all retinal layers. Cathodal iontophoresis was more efficient than anodal iontophoresis in enhancing the tissue penetration of the injected ODN. Photoreceptor delivery of ODN was significantly higher when cathodal saline transpalpebral iontophoresis was applied before rather than after the injection. The extent of enhanced tissue penetration decreased as the interval between iontophoresis and the intravitreous injection increased. A current of 1.5 mA was safe and optimal for the delivery of ODNs to the ONL. One hour after iontophoresis followed by injection, ODN extracted from the retina of treated eyes remained intact. Histology and electron microscopy demonstrated that iontophoresis using the optimal parameters did not induce any permanent tissue alteration or structural damage. CONCLUSIONS: Saline transpalpebral iontophoresis facilitates the penetration of injected ODNs into photoreceptors for at least 3 h. This method may be considered for photoreceptor-targeted gene therapy.
Abstract:
In this paper, we present an efficient numerical scheme for the recently introduced geodesic active fields (GAF) framework for geometric image registration. This framework considers the registration task as a weighted minimal surface problem. Hence, the data term and the regularization term are combined through multiplication in a single, parametrization-invariant and geometric cost functional. The multiplicative coupling provides an intrinsic, spatially varying and data-dependent tuning of the regularization strength, and the parametrization invariance allows working with images of nonflat geometry, generally defined on any smoothly parametrizable manifold. The resulting energy-minimizing flow, however, has poor numerical properties. Here, we provide an efficient numerical scheme that uses a splitting approach: data and regularity terms are optimized over two distinct deformation fields that are constrained to be equal via an augmented Lagrangian approach. Our approach is more flexible than standard Gaussian regularization, since one can interpolate freely between isotropic Gaussian and anisotropic TV-like smoothing. In this paper, we compare the geodesic active fields method with the popular Demons method and three more recent state-of-the-art algorithms: NL-optical flow, MRF image registration, and landmark-enhanced large displacement optical flow. Thus, we can show the advantages of the proposed FastGAF method. It compares favorably against Demons, both in terms of registration speed and quality. Over the range of example applications, it also consistently produces results not far from more dedicated state-of-the-art methods, illustrating the flexibility of the proposed framework.
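Schematically, the splitting described above has the generic augmented-Lagrangian form (E_d and E_r stand for the data and regularity terms; this is the standard template, not the paper's exact functional):

    \begin{aligned}
    &\min_{u,v}\; E_d(u) + E_r(v) \quad \text{s.t.}\quad u = v, \\
    &\mathcal{L}_\mu(u,v,\lambda) = E_d(u) + E_r(v)
      + \langle \lambda,\, u-v \rangle + \tfrac{\mu}{2}\lVert u-v \rVert^2, \\
    &u^{k+1} = \arg\min_u \mathcal{L}_\mu(u, v^k, \lambda^k), \qquad
     v^{k+1} = \arg\min_v \mathcal{L}_\mu(u^{k+1}, v, \lambda^k), \\
    &\lambda^{k+1} = \lambda^k + \mu\,(u^{k+1} - v^{k+1}).
    \end{aligned}

Alternating the two minimizations with the dual update drives the two deformation fields toward equality while letting each subproblem handle only its own term.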
Abstract:
The Swiss National Science Foundation Professorships Programme is presented as a scientific programme that aims to promote outstanding young scholars to professorial positions. Academic excellence is presented as the main selection criterion. The emphasis put on the research portfolio and on the age of the candidates means that the beneficiaries of these professorships put forward an image of excellence that is more embedded in data-based sciences than in the humanities and social sciences, thus strengthening the domination of a sector of scientific activity essentially occupied by men over the sector that has opened up more widely to women. This paper aims to deconstruct the criteria of academic excellence as they appear in this programme and to show that what seem to be quality criteria are in fact inspired by a specific androcentric model. These biases tend to undermine the gender-equality aims of the programme.
Abstract:
We evaluated the performance of an optical-camera-based prospective motion correction (PMC) system in improving the quality of 3D echo-planar imaging functional MRI data. An optical camera and an external marker were used to dynamically track the head movement of subjects during fMRI scanning. PMC was performed by using the motion information to dynamically update the sequence's RF excitation and gradient waveforms such that the field of view was realigned to match the subject's head movement. Task-free fMRI experiments on five healthy volunteers followed a 2×2×3 factorial design with the following factors: PMC on or off; 3.0 mm or 1.5 mm isotropic resolution; and no, slow, or fast head movements. Visual and motor fMRI experiments were additionally performed on one of the volunteers at 1.5 mm resolution, comparing PMC on vs PMC off for no and slow head movements. Metrics were developed to quantify the amount of motion as it occurred relative to k-space data acquisition. The motion quantification metric collapsed the very rich camera tracking data into one scalar value per image volume that was strongly predictive of motion-induced artifacts. The PMC system did not introduce extraneous artifacts for the no-motion conditions and improved the time-series temporal signal-to-noise ratio (tSNR) by 30% to 40% for all combinations of low/high resolution and slow/fast head movement relative to the standard acquisition with no prospective correction. The numbers of activated voxels (p<0.001, uncorrected) in both task-based experiments were comparable for the no-motion cases and increased by 78% and 330%, respectively, for PMC on versus PMC off in the slow-motion cases. The PMC system is a robust solution for decreasing the motion sensitivity of multi-shot 3D EPI sequences, thereby overcoming one of the main roadblocks to their widespread use in fMRI studies.
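Temporal SNR, the quantity behind the 30% to 40% improvement quoted above, is simply the voxelwise mean of the time series divided by its standard deviation over time. A minimal sketch (the 4D array shape and noise level are illustrative):

    import numpy as np

    def temporal_snr(bold, eps=1e-9):
        """Voxelwise tSNR of a 4D fMRI series (x, y, z, time)."""
        return bold.mean(axis=-1) / (bold.std(axis=-1) + eps)

    # Synthetic series: 64x64x40 volume, 200 time points, ~2% noise.
    bold = np.random.default_rng(0).normal(loc=1000.0, scale=20.0,
                                           size=(64, 64, 40, 200))
    tsnr_map = temporal_snr(bold)
    print(float(np.median(tsnr_map)))  # around 50 for this noise level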
Abstract:
Very large molecular systems can be calculated with the so-called CNDOL approximate Hamiltonians, which have been developed by avoiding oversimplifications and using only a priori parameters and formulas from the simpler NDO methods. A new diagonal monoelectronic term, named CNDOL/21, shows great consistency and easier SCF convergence when used together with an appropriate function for charge repulsion energies derived from traditional formulas. It is possible to obtain a priori molecular orbitals and electron excitation properties reliably after configuration interaction of singly excited determinants, maintaining interpretative possibilities even though it is a simplified Hamiltonian. Tests with some unequivocal gas-phase maxima of simple molecules (benzene, furfural, acetaldehyde, hexyl alcohol, methyl amine, 2,5-dimethyl-2,4-hexadiene, and ethyl sulfide) confirm the general quality of this approach in comparison with other methods. Calculations of large systems, such as porphine in the gas phase and a model of the complete retinal binding pocket in rhodopsin with 622 basis functions on 280 atoms at the quantum mechanical level, show reliability, yielding a first allowed transition at 483 nm, very close to the known experimental value of 500 nm for the "dark state." In this very important case, our model assigns a central role in this excitation to a charge transfer from the neighboring Glu(-) counterion to the retinaldehyde polyene chain. Tests with gas-phase maxima of some important molecules corroborate the reliability of CNDOL/2 Hamiltonians.
Abstract:
The functional method is a new test theory using a new scoring method that assumes complexity in the test structure and thus takes into account every correlation between factors and items. The main specificity of the functional method is to model test scores by multiple regression instead of estimating them with simplistic sums of points. To do so, the functional method requires the creation of a hyperspherical measurement space, in which item responses are expressed by their correlation with orthogonal factors. This method has three main qualities. First, measures are expressed in the absolute metric of correlations; therefore, items, scales and persons are expressed in the same measurement space using the same single metric. Second, the factors are systematically orthogonal and error-free, which is optimal for predicting other outcomes. Such predictions can be performed to estimate how one would answer other tests, or even to model one's response strategy if it were perfectly coherent. Third, the functional method provides measures of individuals' response validity (i.e., control indices). Herein, we propose a standard procedure to identify whether test results are interpretable and to exclude invalid results caused by various response biases, based on control indices.
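To make the contrast with sum scoring concrete, here is a schematic of scoring by regression onto orthogonal factors (the loading matrix and responses are invented for the example; this illustrates the general idea, not the authors' exact estimator):

    import numpy as np

    # Correlations of 4 items with 2 orthogonal factors (illustrative loadings).
    loadings = np.array([[0.8, 0.1],
                         [0.7, 0.2],
                         [0.1, 0.9],
                         [0.2, 0.8]])

    responses = np.array([1.0, 0.0, 1.0, 1.0])  # one person's item responses

    # Classic approach: a sum of points that ignores the factor structure.
    sum_score = responses.sum()

    # Functional-style approach: factor scores estimated by least-squares
    # regression of the responses onto the item-factor correlations.
    factor_scores, *_ = np.linalg.lstsq(loadings, responses, rcond=None)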