962 results for "Direct digital detector images"
Abstract:
Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that remain identifiable at any age. As recently reviewed by Mangin and colleagues(2), several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as depth, length, or indices of inter-hemispheric asymmetry(3). These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies heavily on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface(4). Curvature is, however, not straightforward to interpret, as it remains unclear whether there is any direct relationship between curvedness and a biologically meaningful correlate such as cortical volume or surface area. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm to quantify local gyrification with fine spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index(5), a method originally used in comparative neuroanatomy to evaluate cortical folding differences across species. In our implementation, which we name the local Gyrification Index (lGI(1)), we measure the amount of cortex buried within the sulcal folds compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion(6), our method was specifically designed to identify early defects of cortical development. In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as part of the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for reconstructing the brain's cortical surface from structural MRI data. The cortical surface, extracted in the native space of the images with sub-millimeter accuracy, is then used to create an outer surface, which serves as the basis for the lGI calculation. A circular region of interest is delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm, as described in our validation study(1). This process is iterated with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues(7), where the folding index at each point is computed as the ratio of the cortical area contained in a sphere to the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface in a circular region of interest.
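At its core, the lGI for one region of interest reduces to an area ratio between the folded cortical surface and the smooth outer hull. Below is a minimal sketch of that ratio, assuming per-triangle areas and boolean ROI membership masks have already been extracted from the FreeSurfer pial and outer-hull meshes; the ROI matching algorithm itself is not reproduced here, and all names are illustrative.

```python
# A minimal sketch of the lGI area ratio, assuming per-triangle areas and
# boolean ROI membership masks already extracted from the FreeSurfer pial
# and outer-hull meshes; the ROI matching algorithm is not reproduced here.
import numpy as np

def local_gi(pial_areas, hull_areas, pial_in_roi, hull_in_roi):
    """lGI = total (buried + visible) cortical area / visible hull area."""
    cortical = np.asarray(pial_areas)[pial_in_roi].sum()  # folded pial surface
    hull = np.asarray(hull_areas)[hull_in_roi].sum()      # smooth outer envelope
    return float(cortical / hull)
```

In the full method, this ratio is recomputed for largely overlapping regions centered across the outer surface, yielding the gyrification maps used for statistical comparison.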
Abstract:
Advanced kernel methods for remote sensing image classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009. Abstract: The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images have become available to users. However, even if these advances open more and more possibilities for the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this Thesis: the processing of very high spatial and spectral resolution images is treated with data-driven approaches relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of the image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented.
Emphasis is placed on algorithmic efficiency and on the simplicity of the proposed approaches, to avoid overly complex models that practitioners would not use. The major challenge of the Thesis is to remain close to concrete remote sensing problems without losing the methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed: first, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features. This model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information were the common root of the second and third models proposed: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the description of the data. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into the model, opens new challenges and opportunities for remote sensing image processing.
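To make the user-machine interaction concrete, here is a generic margin-sampling active-learning loop of the kind used for iterative training-set construction; this is a hedged sketch, not the thesis's exact algorithm. scikit-learn is assumed, and `ask_user` is a hypothetical oracle standing in for the human labeling pixels.

```python
# A generic margin-sampling active-learning loop; a hedged sketch of the
# user-machine interaction idea, not the thesis's exact algorithm. Assumes
# scikit-learn; `ask_user(i)` is a hypothetical oracle returning the label
# of pixel i, and the initial random draw must cover at least two classes.
import numpy as np
from sklearn.svm import SVC

def active_learning(X_pool, ask_user, n_init=10, n_rounds=20, batch=5):
    rng = np.random.default_rng(0)
    labeled = [int(i) for i in rng.choice(len(X_pool), n_init, replace=False)]
    y = {i: ask_user(i) for i in labeled}
    clf = None
    for _ in range(n_rounds):
        clf = SVC(kernel="rbf").fit(X_pool[labeled], [y[i] for i in labeled])
        f = clf.decision_function(X_pool)
        if f.ndim == 1:                      # binary case: one margin column
            f = f[:, None]
        margins = np.abs(f).min(axis=1)      # small margin = most uncertain
        margins[labeled] = np.inf            # never re-query labeled pixels
        for i in np.argsort(margins)[:batch]:
            y[int(i)] = ask_user(int(i))     # the user labels the query pixel
            labeled.append(int(i))
    return clf
```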
Abstract:
ABSTRACT In recent years, geotechnologies such as remote and proximal sensing, together with attributes derived from digital terrain elevation models, have proven very useful for describing soil variability. However, these information sources are rarely used together. Therefore, a methodology for assessing and spatializing soil classes using information obtained from remote/proximal sensing, GIS, and expert knowledge was applied and evaluated. Two study areas in the State of São Paulo, Brazil, totaling approximately 28,000 ha, were used for this work. First, in one area (area 1), conventional pedological mapping was carried out, and from the soil classes found, patterns were obtained with the following information: a) spectral information (shape of features and absorption intensity of spectral curves over the 350-2,500 nm wavelength range) of soil samples collected at specific points in the area (according to each soil type); b) equations for determining chemical and physical soil properties, obtained by relating the levels of chemical and physical attributes measured in the laboratory by conventional methods to the spectral data; c) supervised classification of Landsat TM 5 images, in order to detect changes in soil particle size (soil texture); d) the relationship between soil classes and relief attributes. Subsequently, the patterns obtained were applied to area 2 to derive a pedological classification of its soils, but within a GIS (ArcGIS). Finally, a conventional pedological map was produced for area 2 and compared with the digital map, i.e., the one obtained using only the predetermined patterns. The proposed methodology achieved 79 % accuracy at the first categorical level of the Soil Classification System, 60 % accuracy at the second categorical level, and became less useful at categorical level 3 (37 % accuracy).
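Step (b) amounts to a spectral calibration problem: regressing laboratory-measured properties on the 350-2,500 nm reflectance curves. The abstract does not name the regression technique used, so the sketch below uses partial least squares as a common stand-in; `spectra`, `values`, and the component count are illustrative assumptions.

```python
# A hedged sketch of spectral calibration with partial least squares; the
# paper does not name its regression technique, so PLS is a common stand-in.
# `spectra` is (n_samples, n_bands) reflectance over 350-2,500 nm and
# `values` holds one laboratory-measured property (e.g. clay content).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def calibrate_property(spectra, values, n_components=8):  # count is illustrative
    """Cross-validated PLS calibration of one soil property from spectra."""
    pls = PLSRegression(n_components=n_components)
    pred = cross_val_predict(pls, spectra, values, cv=10)
    rmse = float(np.sqrt(np.mean((pred.ravel() - np.asarray(values)) ** 2)))
    return pls.fit(spectra, values), rmse   # final model, applied to area 2
```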
Abstract:
We have developed a digital holographic microscope (DHM), operating in transmission mode, especially dedicated to the quantitative visualization of phase objects such as living cells. The method is based on an original numerical algorithm presented in detail elsewhere [Cuche et al., Appl. Opt. 38, 6994 (1999)]. DHM images of living cells in culture are shown for, to our knowledge, the first time. They represent the distribution of optical path length over the cell, which has been measured with subwavelength accuracy. These DHM images are compared with those obtained with the widely used phase contrast and Nomarski differential interference contrast techniques.
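The quantity displayed in such images follows directly from the reconstructed phase: a phase delay φ at wavelength λ corresponds to an optical path length φ·λ/(2π). A minimal sketch, assuming an unwrapped phase map and an illustrative wavelength:

```python
# A minimal sketch of converting a reconstructed, unwrapped phase map into
# optical path length; the default wavelength value is illustrative only.
import numpy as np

def optical_path_length(phase_rad, wavelength_nm=633.0):
    """OPL(x, y) = phase * wavelength / (2*pi), returned in nanometers."""
    return phase_rad * wavelength_nm / (2.0 * np.pi)
```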
Abstract:
Purpose: Cervical foraminal injection performed with a direct foraminal approach may induce serious neurologic complications. We describe a technique of CT-guided cervical facet joint (CFJ) injection as an indirect foraminal injection, including its feasibility and the diffusion pathways of the contrast agent. Methods and materials: This retrospective study included 84 punctures in 65 consecutive patients presenting with neck pain and/or radiculopathy related to osteoarthritis or soft disc herniation. CT images were obtained from C2 to T1 in the supine position, with a metallic landmark on the skin. CFJ punctures were performed by senior MSK radiologists using a lateral approach. CT control of the CFJ opacification was performed after injection of contrast agent (1 ml), followed by a slow-acting corticosteroid (25 mg). CFJ opacification was considered successful when joint space and/or capsular recess opacification occurred. The diffusion of contrast agent into the foraminal and epidural spaces was recorded. We assessed the epidural diffusion on both axial and sagittal images, with a classification into two groups (small diffusion or large diffusion). Results: CFJ opacification was successful in 82% (69/84). Epidural and/or foraminal opacification was obtained in 74% (51/69). Foraminal opacification occurred in 92% (47/51) and epidural opacification in 63% (32/51), with small diffusion in 47% (15/32) and large diffusion in 53% (17/32). No complications occurred. Conclusion: CT-guided CFJ injection is easy to perform and safe. It is most often successful, with frequent epidural and/or foraminal diffusion of the contrast agent. This technique could be an interesting and safe alternative to foraminal cervical injection.
Abstract:
Images of myocardial strain can be used to diagnose heart disease, plan and monitor treatment, and learn about cardiac structure and function. Three-dimensional (3D) strain is typically quantified using many magnetic resonance (MR) images obtained in two or three orthogonal planes. Problems with this approach include long scan times, image misregistration, and through-plane motion. This article presents a novel method for calculating cardiac 3D strain using a stack of two or more images acquired in only one orientation. The zHARP pulse sequence encodes in-plane motion using MR tagging and out-of-plane motion using phase encoding, and has previously been shown to be capable of computing 3D displacement within a single image plane. Here, data from two adjacent image planes are combined to yield a 3D strain tensor at each pixel; stacks of zHARP images can be used to derive stacked arrays of 3D strain tensors without imaging multiple orientations and without numerical interpolation. The performance and accuracy of the method are demonstrated in vitro on a phantom and in vivo in four healthy adult human subjects.
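Given a dense 3D displacement field, forming the strain tensor at each pixel is standard continuum mechanics: the deformation gradient F = I + ∂u/∂X yields the Green-Lagrange strain E = ½(FᵀF − I). A hedged numpy sketch of that tensor algebra (not of the zHARP phase processing itself), assuming a displacement array `u` of shape (3, nz, ny, nx):

```python
# A hedged numpy sketch of forming the 3D Green-Lagrange strain tensor at
# each pixel from a dense displacement field; this illustrates the tensor
# algebra only, not the zHARP phase processing. `u` has shape (3, nz, ny, nx)
# and `spacing` gives the voxel size along each axis (same units as u).
import numpy as np

def green_lagrange(u, spacing=(1.0, 1.0, 1.0)):
    # displacement gradient du_i/dx_j at every voxel: shape (3, 3, nz, ny, nx)
    grad = np.stack([np.stack(np.gradient(u[i], *spacing)) for i in range(3)])
    eye = np.eye(3)[:, :, None, None, None]
    F = grad + eye                               # deformation gradient F = I + grad u
    FtF = np.einsum("ki...,kj...->ij...", F, F)  # F^T F, contracted per voxel
    return 0.5 * (FtF - eye)                     # E = (F^T F - I) / 2
```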
Abstract:
A digitized image method was compared with a standard washing technique for measuring citrus roots in the field. Video pictures of roots were taken in a soil profile. The profile area analyzed was defined by iron rings, which were also used to remove the roots to determine their dry weight. The roots present in the pictures were quantified using SIARCS software developed by Embrapa. The root length and area determined from the digital images provided a good estimate of the root quantity present in the profile.
Abstract:
The aim of this work is to present a new concept, called on-line desorption of dried blood spots (on-line DBS), allowing the direct analysis of a dried blood spot coupled to a liquid chromatography-mass spectrometry (LC/MS) device. The system is based on a stainless-steel cell that receives a blood sample (10 microL) previously spotted on filter paper. The cell is then integrated into the LC/MS system, where the analytes are desorbed from the paper toward a column-switching system ensuring the purification and separation of the compounds before their detection on a single quadrupole MS coupled to an atmospheric pressure chemical ionisation (APCI) source. With the described procedure, no pretreatment is necessary even though the analysis is performed on a whole blood sample. To demonstrate the applicability of the concept, saquinavir, imipramine, and verapamil were chosen. Despite the use of a small sampling volume and a single quadrupole detector, on-line DBS allowed the analysis of these three compounds over their therapeutic concentration ranges, from 50 to 500 ng/mL for imipramine and verapamil and from 100 to 1000 ng/mL for saquinavir. Moreover, the method showed good repeatability, with relative standard deviation (RSD) lower than 15% at two concentration levels (low and high). Response functions were found to be linear over the therapeutic range of each compound and were used to determine the concentrations of real patient samples for saquinavir. Comparison of the values found with those of a validated method used routinely in a reference laboratory showed good correlation between the two methods. Moreover, good selectivity was observed, ensuring that no endogenous or chemical components interfered with the quantitation of the analytes. This work demonstrates the feasibility and applicability of the on-line DBS procedure for bioanalysis.
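The quantitation step rests on those linear response functions: fit response versus concentration, then invert the fit for unknown samples. A minimal sketch; the concentration levels and peak-area responses below are made-up illustrative numbers, not data from the paper.

```python
# A minimal sketch of linear calibration for quantitation; the numbers below
# are illustrative placeholders, not data from the paper.
import numpy as np

def calibrate(conc, response):
    """Fit a linear response function; return a response-to-concentration mapper."""
    slope, intercept = np.polyfit(conc, response, 1)
    return lambda r: (r - intercept) / slope

# illustrative use over the imipramine range (50-500 ng/mL)
to_conc = calibrate(np.array([50.0, 100.0, 250.0, 500.0]),
                    np.array([1.1e4, 2.3e4, 5.6e4, 1.1e5]))
patient_conc = to_conc(3.7e4)   # back-calculate an unknown sample
```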
Abstract:
The SHRP Modified Georgia Digital Faultmeter was loaned to the Iowa Department of Transportation in January 1993 for evaluation. A study was undertaken comparing the faultmeter to Iowa's current method of fault measurement. The following conclusions were made after comparing the faultmeter to Iowa's gauge: The faultmeter was lighter and easier to maneuver and position. The faultmeter's direct readout was quicker to read. The faultmeter had greater precision. The faultmeter gave consistently lower fault readings than the Iowa gauge.
Abstract:
This paper proposes an automatic hand detection system that combines the Fourier-Mellin Transform with other computer vision techniques to achieve hand detection in cluttered-scene color images. The proposed system uses the Fourier-Mellin Transform as an invariant feature extractor to perform RST-invariant hand detection. In the first stage of the system, a simple non-adaptive skin-color-based image segmentation and a corner-based interest point detector are used to identify regions of interest that contain possible matches. A sliding-window algorithm is then used to scan the image at different scales, performing the FMT calculations only in the previously detected regions of interest and comparing the extracted FM descriptor of each window with a hand descriptor database obtained from a training image set. The results of the experiments performed suggest Fourier-Mellin invariant features as a promising approach for automatic hand detection.
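The invariance chain behind such a descriptor is: the magnitude of the 2D Fourier spectrum discards translation; a log-polar resampling turns rotation and scaling into cyclic shifts; and a second Fourier magnitude discards those shifts. A minimal numpy/scipy sketch under these assumptions (window size and sampling grid are illustrative, and this is not the authors' exact pipeline):

```python
# A minimal numpy/scipy sketch of an RST-tolerant Fourier-Mellin descriptor;
# window size and the log-polar sampling grid are illustrative, and this is
# not the authors' exact pipeline.
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(mag, n_r=64, n_theta=64):
    """Resample a centered magnitude spectrum onto a log-polar grid."""
    h, w = mag.shape
    cy, cx = h / 2.0, w / 2.0
    radii = np.exp(np.linspace(0.0, np.log(min(cy, cx)), n_r))[:, None]
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)[None, :]
    ys = cy + radii * np.sin(theta)
    xs = cx + radii * np.cos(theta)
    return map_coordinates(mag, [ys, xs], order=1, mode="nearest")

def fm_descriptor(window):
    """Translation-, rotation-, and scale-tolerant descriptor of a gray window."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(window)))  # drops translation
    lp = log_polar(np.log1p(mag))       # rotation/scale become cyclic shifts
    desc = np.abs(np.fft.fft2(lp))      # second magnitude drops those shifts
    return desc.ravel() / (np.linalg.norm(desc) + 1e-12)
```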
Abstract:
Direct MR arthrography has better diagnostic accuracy than MR imaging alone. However, contrast material is not always homogeneously distributed in the articular space. Lesions of cartilage surfaces or intra-articular soft tissues can thus be misdiagnosed. Concomitant application of axial traction during MR arthrography leads to articular distraction. This enables better distribution of contrast material in the joint and better delineation of intra-articular structures. Therefore, this technique improves the detection of cartilage lesions. Moreover, the axial stress applied to articular structures may reveal lesions invisible on MR images obtained without traction. Based on our clinical experience, we believe that this relatively little-known technique is promising and should be further developed.
Abstract:
This article reports on a lossless data hiding scheme for digital images in which the data hiding capacity is determined either by the minimum acceptable subjective quality or by the demanded capacity. In the proposed method, data are hidden within the image prediction errors, and well-known prediction algorithms such as the median edge detector (MED), gradient adjacent prediction (GAP), and Jiang prediction are tested for this purpose. In this method, the histogram of the image prediction errors is first computed, and then, based on the required capacity or desired image quality, the prediction error values beyond the chosen bins are shifted. The empty space created by such a shift is used for embedding the data. Experimental results show the distinct superiority of the image prediction error histogram over the conventional image histogram itself, owing to the much narrower spectrum of the former. We have also devised an adaptive method for hiding data, where subjective quality is traded for data hiding capacity. Here the positive and negative error values are chosen such that the sum of their frequencies on the histogram is just above the given capacity or the desired quality threshold.
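As a concrete illustration of the embedding idea, here is a minimal raster-scan sketch using the MED predictor and a single-peak histogram shift; it is a simplified sketch, not the paper's full adaptive method, and overflow/underflow handling at the intensity bounds is omitted. Because each prediction uses already-modified causal neighbors, a decoder scanning the stego image in the same order can recompute the same predictions and invert the shift.

```python
# A minimal raster-scan sketch of histogram-shifting embedding in MED
# prediction errors; a simplified sketch, not the paper's full adaptive
# method. Overflow/underflow handling at the intensity bounds is omitted.
import numpy as np

def med(a, b, c):
    """Median edge detector (MED) prediction from left, upper, upper-left."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def embed(img, bits, peak=0):
    """Hide `bits` (list of 0/1) in prediction errors equal to `peak`."""
    out = img.astype(np.int64)
    k = 0
    for y in range(1, out.shape[0]):
        for x in range(1, out.shape[1]):
            # context uses already-modified neighbors, so a decoder scanning
            # the stego image in the same order recomputes the same prediction
            p = med(out[y, x - 1], out[y - 1, x], out[y - 1, x - 1])
            e = int(img[y, x]) - p
            if e > peak:
                e += 1                      # shift: frees the bin at peak + 1
            elif e == peak and k < len(bits):
                e += bits[k]                # 0 stays at peak, 1 moves up
                k += 1
            out[y, x] = p + e
    return out
```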
Abstract:
This letter presents a lossless data hiding scheme for digital images which uses an edge detector to locate plain areas for embedding. The proposed method takes advantage of the well-known gradient adjacent prediction utilized in image coding. In the suggested scheme, prediction errors and edge values are first computed; then, excluding the edge pixels, the prediction error values are slightly modified through shifting to embed data. The aim of the proposed scheme is to decrease the number of modified pixels, and thereby improve transparency, by keeping the edge pixel values of the image intact. The experimental results demonstrate that the proposed method is capable of hiding more secret data than known techniques at the same PSNR, showing that using an edge detector to locate plain areas for lossless data embedding can enhance performance in terms of data embedding rate versus the PSNR of the marked image with respect to the original image.
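The selection step can be pictured as building a mask of "plain" pixels and restricting an embedding pass, such as the histogram-shifting sketch above, to that mask. A short sketch using a simple gradient-magnitude threshold as a stand-in for the paper's edge detector (the threshold value is illustrative):

```python
# A short sketch of restricting embedding to "plain" (non-edge) pixels, using
# a gradient-magnitude threshold as a stand-in for the paper's edge detector;
# the threshold value is illustrative.
import numpy as np

def plain_area_mask(img, thresh=8.0):
    gy, gx = np.gradient(img.astype(np.float64))
    return np.hypot(gx, gy) < thresh   # True where embedding is allowed
```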