960 results for Automatic Gridding of microarray images


Relevance: 100.00%

Abstract:

Spectroscopic and photometric observations in a 6 arcmin x 6 arcmin field centered on the rich cluster of galaxies Abell 2390 are presented. The photometry concerns 700 objects and the spectroscopy 72 objects. The redshift survey shows that the mean redshift of the cluster is 0.232. An original method for automatic determination of the spectral type of galaxies is presented.

Relevance: 100.00%

Abstract:

The integrity and function of neurons depend on their continuous interactions with glial cells. In the peripheral nervous system, glial functions are exerted by Schwann cells (SCs). SCs sense synaptic and extrasynaptic manifestations of action potential propagation and adapt their physiology to support neuronal activity. Here we review the existing literature on extrasynaptic bidirectional axon-SC communication, focusing particularly on the implications of neuronal activity. To shed light on the underlying mechanisms, we conduct a thorough analysis of microarray data from SC-rich mouse sciatic nerve at different developmental stages and in neuropathic models. We identify molecules that are potentially involved in SC detection of neuronal activity signals inducing subsequent glial responses. We further suggest that alterations in the activity-dependent axon-SC crosstalk have an impact on peripheral neuropathies. Together with previously reported data, these observations open new perspectives for deciphering the glial mechanisms that support neuronal function.

Relevance: 100.00%

Abstract:

Among the types of remote sensing acquisitions, optical images are certainly one of the most widely relied upon data sources for Earth observation. They provide detailed measurements of the electromagnetic radiation reflected or emitted by each pixel in the scene. Through a process termed supervised land-cover classification, these measurements make it possible to distinguish, automatically yet accurately, the objects at the surface of our planet. In this respect, when producing a land-cover map of the surveyed area, the availability of training examples representative of each thematic class is crucial for the success of the classification procedure. However, in real applications, labeled pixels are usually scarce owing to several constraints on the sample collection process. When analyzing an image for which those key samples are unavailable, a viable solution consists in resorting to the ground truth data of other, previously acquired images. This option is attractive, but several factors such as atmospheric, ground and acquisition conditions can cause radiometric differences between the images, thereby hindering the transfer of knowledge from one image to another. The goal of this thesis is to supply remote sensing image analysts with suitable processing techniques to ensure a robust portability of classification models across different images. The ultimate purpose is to map the land-cover classes over large spatial and temporal extents with minimal ground information. To overcome, or simply quantify, the observed shifts in the statistical distribution of the spectra of the materials, we study four approaches drawn from machine learning. First, we propose a strategy to intelligently sample the image of interest so that labels are collected only for the most useful pixels. This iterative routine constantly re-evaluates how pertinent the initial training data, which actually belong to a different image, are to the new image. Second, an approach to reduce the radiometric differences among the images by projecting the respective pixels into a common new data space is presented. We analyze a kernel-based feature extraction framework suited for such problems, showing that, after this relative normalization, the cross-image generalization abilities of a classifier are greatly improved. Third, we test a new data-driven measure of distance between probability distributions to assess the distortions caused by differences in acquisition geometry affecting series of multi-angle images, and we gauge the portability of classification models across these sequences. In both exercises, the efficacy of classic physically and statistically based normalization methods is discussed. Finally, we explore a new family of approaches based on sparse representations of the samples to reciprocally convert the data spaces of two images. The projection function bridging the images allows the synthesis of new pixels with more similar characteristics, ultimately facilitating land-cover mapping across images.
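
The thesis abstract mentions a data-driven measure of distance between the probability distributions of pixel spectra. Purely as an illustration (the abstract does not name the specific measure studied), one widely used kernel-based quantity of this kind is the squared maximum mean discrepancy; the sketch below computes it for two simulated sets of pixel spectra.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel values between the rows of X and Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    # Squared maximum mean discrepancy between two samples of pixel spectra;
    # larger values indicate a larger shift between the two distributions.
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())

# Hypothetical data: 500 pixels x 4 spectral bands from two acquisitions,
# the second one affected by a simulated radiometric shift.
rng = np.random.default_rng(0)
img_a = rng.normal(0.0, 1.0, size=(500, 4))
img_b = rng.normal(0.3, 1.1, size=(500, 4))
print(f"squared MMD = {mmd2(img_a, img_b):.4f}")
```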

Relevance: 100.00%

Abstract:

Introduction: Surgical decision making in lumbar spinal stenosis (LSS) takes into account primarily clinical symptoms as well as concordant radiological findings. We hypothesized that a wide variation in the operative threshold would be found, in particular as far as the judgment of the severity of radiological stenosis is concerned. Patients and methods: The number of surgeons who would proceed to decompression was studied relative to the perceived severity of radiological stenosis, based either on measurements of the dural sac cross-sectional area (DSCA) or on the recently described morphological grading as seen on axial T2 MRI images. A link to an electronic survey page with a set of ten axial T2 MRI images taken from ten patients with either low back pain or LSS was sent to members of three national or international spine societies. These 10 images were initially presented in random order and re-shuffled on a second page, this time including the DSCA measurements in mm2 (ranging from 14 to 226 mm2), giving a total of 20 images to appraise. Morphological grades ranged from A to D. Surgeons were asked whether they would consider decompression given the radiological appearance of stenosis, assuming severe symptoms of neurological claudication in patients who were otherwise fit for surgery. Fisher's exact test was performed following dichotomization of the data when appropriate. Results: A total of 142 spine surgeons (113 orthopedic spine surgeons, 29 neurosurgeons) responded from 25 countries. Substantial agreement was observed on operating on patients with severe (grade C) or extreme (grade D) stenosis as defined by the morphological grade, compared with the lesser (A and B) grades (p<0.0001). The decision to operate did not depend on the number of years in practice, the medical density in the country of practice, or the specialty, although more neurosurgeons would operate on grade C stenosis (p<0.005). Disclosing the DSCA measurement did not alter the decision to operate. Although only 20 surgeons had prior knowledge of the description of the morphological grading, their responses showed no statistically significant difference from those of the remaining 122 physicians. Conclusions: This study showed that surgeons across borders are less influenced by the DSCA in their decision making than by the morphological appearance of the dural sac. Classifying LSS according to morphology rather than surface measurements appears to be consistent with current clinical practice.
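
The analysis dichotomises the survey responses and applies Fisher's exact test to the resulting 2x2 tables. A minimal sketch of that kind of contingency analysis is shown below; the counts are invented for illustration and are not the study's data.

```python
from scipy.stats import fisher_exact

# Hypothetical dichotomised counts (not the study's data): rows are the morphological
# grade (C/D vs. A/B), columns are the surgeon's decision (operate vs. do not operate).
table = [[120, 22],   # grade C or D: would operate / would not operate
         [35, 107]]   # grade A or B: would operate / would not operate
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.2e}")
```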

Relevance: 100.00%

Abstract:

The research reported in this series of articles aimed at (1) automating the search of questioned ink specimens in ink reference collections and (2) evaluating the strength of ink evidence in a transparent and balanced manner. These aims require that ink samples are analysed in an accurate and reproducible way and that they are compared in an objective and automated way, the latter requirement being due to the large number of comparisons that are necessary in both scenarios. A research programme was designed to (a) develop a standard methodology for analysing ink samples in a reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in forensic contexts. This report focuses on the last of the three stages of the research programme. The calibration and acquisition process and the mathematical comparison algorithms were described in previous papers [C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part I: Development of a quality assurance process for forensic ink analysis by HPTLC, Forensic Sci. Int. 185 (2009) 29-37; C. Neumann, P. Margot, New perspectives in the use of ink evidence in forensic science-Part II: Development and testing of mathematical algorithms for the automatic comparison of ink samples analysed by HPTLC, Forensic Sci. Int. 185 (2009) 38-50]. In this paper, the benefits and challenges of the proposed concepts are tested in two forensic contexts: (1) ink identification and (2) ink evidential value assessment. The results show that different algorithms are better suited for different tasks. This research shows that it is possible to build digital ink libraries using the most commonly used ink analytical technique, i.e. high-performance thin-layer chromatography, despite its reputation for lacking reproducibility. More importantly, it is possible to assign evidential value to ink evidence in a transparent way using a probabilistic model, and therefore to move away from the traditional subjective approach, which is entirely based on experts' opinion and is usually not very informative. While there is room for improvement, this report demonstrates the significant gains obtained over the traditional subjective approach for the search of ink specimens in ink databases and for the interpretation of their evidential value.
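
The abstract refers to assigning evidential value with a probabilistic model. One common way to express such a value is a score-based likelihood ratio, comparing how probable an observed ink-comparison score is under the same-source versus the different-source proposition. The sketch below is only an illustration under assumed Gaussian score distributions; it is not the model developed in this series of papers.

```python
from scipy.stats import norm

# Assumed score distributions, which in practice would be estimated from many
# comparisons of known same-source and different-source ink pairs.
same_source = norm(loc=0.92, scale=0.04)
diff_source = norm(loc=0.55, scale=0.15)

def likelihood_ratio(score):
    # LR > 1 supports the same-source proposition, LR < 1 the different-source one.
    return same_source.pdf(score) / diff_source.pdf(score)

print(f"LR for an observed score of 0.88: {likelihood_ratio(0.88):.1f}")
```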

Relevance: 100.00%

Abstract:

The aim of this article is to show how a contemporary playwright turns once more to the Platonic image of the cave in order to reflect on the necessary existential journey of men and women, as in a Bildungsroman. Sooner or later, men and women must abandon the protection offered by any sort of cavern, be it the home, the family garden or the family itself. Although R. Sirera writes from a point of view that is by no means idealistic or metaphysical, thanks to the very applicability of Platonic images Plato once again becomes a classical reference, both useful and even unavoidable, if one bears in mind the Platonic origin of all literary caverns.

Relevance: 100.00%

Abstract:

The expansion dynamics of the ablation plume generated by KrF laser irradiation of hydroxyapatite targets in a 0.1 mbar water atmosphere has been studied by fast intensified charge-coupled device imaging with the aid of optical bandpass filters. The aim of the filters is to isolate the emission of a single species, which allows separate analysis of its expansion. Images obtained without a filter revealed two emissive components in the plume, which expand at different velocities for delay times of up to 1.1 μs. The dynamics of the first component is similar to that of a spherical shock wave, whereas the second component, smaller than the first, expands at constant velocity. Images obtained through a 520 nm filter show that the luminous intensity distribution and evolution of emissive atomic calcium is almost identical to those of the first component of the total emission and that there is no contribution from this species to the emission from the second component of the plume. The analysis through a 780 nm filter reveals that atomic oxygen partially diffuses into the water atmosphere and that there is a contribution from this species to the emission from the second component. The last species studied here, calcium oxide, was analyzed by means of a 600 nm filter. The images revealed an intensity pattern more complex than those from the atomic species. Calcium oxide also contributes to the emission from the second component. Finally, all the experiments were repeated in a Ne atmosphere. Comparison of the images revealed chemical reactions between the first component of the plume and the water atmosphere.
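
One simple way to separate a shock-wave-like expansion from expansion at constant velocity is to fit the front position of each component with a power law R = a·t^n: a Sedov-Taylor-type spherical blast wave gives n ≈ 0.4, whereas free expansion gives n ≈ 1. The sketch below uses made-up front positions, not the measured data.

```python
import numpy as np

# Hypothetical front positions (mm) of the two plume components at several ICCD delays.
t = np.array([0.2, 0.4, 0.6, 0.8, 1.0])          # delay times, us
r_first = np.array([2.1, 2.8, 3.3, 3.7, 4.1])    # first component, roughly shock-like
r_second = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # second component, roughly constant velocity

for label, r in [("first", r_first), ("second", r_second)]:
    n, log_a = np.polyfit(np.log(t), np.log(r), 1)   # slope of the log-log fit is the exponent
    print(f"{label} component: exponent n = {n:.2f}")
```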

Relevance: 100.00%

Abstract:

Real-world images are complex objects, difficult to describe but at the same time possessing a high degree of redundancy. A very recent study [1] on the statistical properties of natural images reveals that natural images can be viewed through different partitions which are essentially fractal in nature. One particular fractal component, related to the most singular (sharpest) transitions in the image, seems to be highly informative about the whole scene. In this paper we will show how to decompose the image into its fractal components. We will see that the most singular component is related to (but not coincident with) the edges of the objects present in the scene. We will propose a new, simple method to reconstruct the image from the information contained in that most informative component. We will see that the quality of the reconstruction is strongly dependent on the capability to extract the relevant edges in the determination of the most singular set. We will discuss the results from the perspective of coding, proposing this method as a starting point for future developments.
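
As a rough illustration of isolating the sharpest transitions in an image, the sketch below keeps only the pixels with the largest gradient magnitudes. The actual method relies on local singularity exponents of a multifractal decomposition, so this is just a crude proxy, not the authors' algorithm.

```python
import numpy as np

def sharpest_transition_mask(image, quantile=0.99):
    # Crude proxy for the most singular set: keep pixels whose gradient magnitude
    # lies above the given quantile (the real method uses singularity exponents).
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    return magnitude >= np.quantile(magnitude, quantile)

# Synthetic example with one sharp vertical edge.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
mask = sharpest_transition_mask(img)
print(mask.sum(), "pixels retained as the (proxy) most singular set")
```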

Relevance: 100.00%

Abstract:

This case-control study assessed whether the trabecular bone score (TBS), determined from gray-level analysis of DXA images, might be of any diagnostic value, either alone or combined with bone mineral density (BMD), in the assessment of vertebral fracture risk among postmenopausal women with osteopenia. Of 243 postmenopausal Caucasian women, 50-80 years old, with BMD T-scores between -1.0 and -2.5, we identified 81 with osteoporosis-related vertebral fractures and compared them with 162 age-matched controls without fractures. Primary outcomes were BMD and TBS. For BMD, each incremental decrease in BMD was associated with an OR = 1.54 (95% CI = 1.17-2.03), and the AUC was 0.614 (0.550-0.676). For TBS, corresponding values were 2.53 (1.82-3.53) and 0.721 (0.660-0.777). The difference in the AUC for TBS vs. BMD was statistically significant (p = 0.020). The OR for (TBS + BMD) was 2.54 (1.86-3.47) and the AUC 0.732 (0.672-0.787). In conclusion, the TBS warrants a closer look to see whether it may be of clinical usefulness in the determination of fracture risk in postmenopausal osteopenic women.
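
The study reports odds ratios per incremental decrease in the predictor and areas under the ROC curve. The sketch below shows how such figures are typically computed, on simulated data and under the assumption that the increment is one standard deviation; it does not reproduce the study's analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Simulated stand-in data: one row per woman, a standardised TBS value and
# fracture status (1 = vertebral fracture, 0 = control).
rng = np.random.default_rng(1)
tbs = np.concatenate([rng.normal(-0.5, 1.0, 81), rng.normal(0.3, 1.0, 162)])
fracture = np.array([1] * 81 + [0] * 162)

model = LogisticRegression().fit(tbs.reshape(-1, 1), fracture)
or_per_sd_decrease = np.exp(-model.coef_[0, 0])   # OR per 1 SD *decrease* of the predictor
auc = roc_auc_score(fracture, -tbs)               # lower TBS ranked as higher risk
print(f"OR per SD decrease = {or_per_sd_decrease:.2f}, AUC = {auc:.3f}")
```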

Relevance: 100.00%

Abstract:

Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that will be identifiable at any age. As recently reviewed by Mangin and colleagues(2), several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as the depth, length or indices of inter-hemispheric asymmetry(3). These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies tightly on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where the smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface(4). The curvature is, however, not straightforward to interpret, as it remains unclear whether there is any direct relationship between curvedness and a biologically meaningful correlate such as cortical volume or surface area. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm to quantify local gyrification with high spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index(5), a method originally used in comparative neuroanatomy to evaluate cortical folding differences across species. In our implementation, which we name the local Gyrification Index (lGI(1)), we measure the amount of cortex buried within the sulcal folds as compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion(6), our method was specifically designed to identify early defects of cortical development. In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as part of the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for reconstructing the brain's cortical surface from structural MRI data. The cortical surface, extracted in the native space of the images with sub-millimeter accuracy, is then used to create an outer surface, which serves as the basis for the lGI calculation. A circular region of interest is then delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm, as described in our validation study(1). This process is iterated with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues(7), where the folding index at each point is computed as the ratio of the cortical area contained in a sphere to the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface within a circular region of interest.
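
At its core, the lGI is the ratio between the area of the cortical (pial) surface contained in a region of interest and the area of the matching region on the smooth outer surface. The toy sketch below computes that ratio for triangulated patches; it is a conceptual illustration only, not the FreeSurfer implementation.

```python
import numpy as np

def patch_area(vertices, faces):
    # Total area of a triangulated surface patch (vertices: Nx3, faces: Mx3 indices).
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(v1 - v0, v2 - v0), axis=1).sum()

def local_gyrification_index(pial_patch, outer_patch):
    # lGI ~ cortical area inside the ROI divided by the area of the matching
    # outer-surface region; values near 1 indicate little buried cortex.
    return patch_area(*pial_patch) / patch_area(*outer_patch)

# Toy example: a folded pial patch (two triangles) under a flat outer patch (one triangle).
pial = (np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1.5]], float),
        np.array([[0, 1, 2], [1, 3, 2]]))
outer = (np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], float),
         np.array([[0, 1, 2]]))
print(f"toy lGI = {local_gyrification_index(pial, outer):.2f}")
```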

Relevance: 100.00%

Abstract:

PURPOSE: Ocular anatomy and radiation-associated toxicities pose unique challenges for external beam radiation therapy. For treatment planning, precise modeling of the organs at risk and of the tumor volume is crucial. Development of a precise eye model and automatic adaptation of this model to patients' anatomy remain problematic because of organ shape variability. This work introduces the application of a 3-dimensional (3D) statistical shape model as a novel method for precise eye modeling for external beam radiation therapy of intraocular tumors. METHODS AND MATERIALS: Manual and automatic segmentations were compared for 17 patients, based on head computed tomography (CT) volume scans. A 3D statistical shape model of the cornea, lens, and sclera, as well as of the optic disc position, was developed. Furthermore, an active shape model was built to enable automatic fitting of the eye model to CT slice stacks. Cross-validation was performed with leave-one-out tests for all training shapes by measuring Dice coefficients and mean segmentation errors between the automatic segmentation and manual segmentation by an expert. RESULTS: Cross-validation revealed a Dice similarity of 95% ± 2% for the sclera and cornea and 91% ± 2% for the lens. The overall mean segmentation error was 0.3 ± 0.1 mm. Average segmentation time was 14 ± 2 s on a standard personal computer. CONCLUSIONS: Our results show that the presented solution outperforms state-of-the-art methods in terms of accuracy, reliability, and robustness. Moreover, the shape of the eye model as well as its variability is learned from a training set rather than obtained by making shape assumptions (eg, as with spherical or elliptical models). Therefore, the model appears to be capable of modeling nonspherically and nonelliptically shaped eyes.
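
For reference, the Dice coefficient used in the validation measures the overlap of two segmentations as 2|A∩B|/(|A|+|B|). A minimal sketch on synthetic binary masks (not the study's segmentations):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    # Dice similarity between two binary segmentation masks: 2|A ∩ B| / (|A| + |B|).
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical automatic vs. manual masks on one CT slice.
auto = np.zeros((128, 128), dtype=bool)
auto[40:90, 40:90] = True
manual = np.zeros((128, 128), dtype=bool)
manual[45:95, 42:92] = True
print(f"Dice = {dice_coefficient(auto, manual):.3f}")
```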

Relevance: 100.00%

Abstract:

This paper describes methods to analyze the brain's electric fields recorded with multichannel electroencephalography (EEG) and demonstrates their implementation in the software CARTOOL. It focuses on the analysis of the spatial properties of these fields and on the quantitative assessment of changes in field topography across time, experimental conditions, or populations. Topographic analyses are advantageous because they are reference-independent and thus yield statistically unambiguous results. Neurophysiologically, differences in topography directly indicate changes in the configuration of the active neuronal sources in the brain. We describe global measures of field strength and field similarities, temporal segmentation based on topographic variations, topographic analysis in the frequency domain, topographic statistical analysis, and source imaging based on distributed inverse solutions. All analysis methods are implemented in a freely available academic software package called CARTOOL. Besides providing these analysis tools, CARTOOL is particularly designed to visualize the data and the analysis results using 3-dimensional display routines that allow rapid manipulation and animation of 3D images. CARTOOL is therefore a helpful tool for researchers as well as clinicians to interpret multichannel EEG and evoked potentials in a global, comprehensive, and unambiguous way.
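
Two of the global measures mentioned, field strength and topographic similarity, are commonly quantified as the Global Field Power and the Global Map Dissimilarity. The sketch below implements these standard definitions on simulated maps; it is illustrative only and is not CARTOOL code.

```python
import numpy as np

def global_field_power(v):
    # GFP: spatial standard deviation of the average-referenced map at one time point.
    v = v - v.mean()
    return np.sqrt(np.mean(v**2))

def global_map_dissimilarity(v1, v2):
    # GMD: GFP of the difference between two average-referenced, GFP-normalised maps;
    # 0 means identical topographies, 2 means inverted topographies.
    def normalise(v):
        v = v - v.mean()
        return v / global_field_power(v)
    return global_field_power(normalise(v1) - normalise(v2))

# Hypothetical 64-channel maps at two latencies.
rng = np.random.default_rng(2)
map_a, map_b = rng.normal(size=64), rng.normal(size=64)
print(f"GFP = {global_field_power(map_a):.3f}, GMD = {global_map_dissimilarity(map_a, map_b):.3f}")
```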

Relevance: 100.00%

Abstract:

The goal of this work is to develop a method to objectively compare the performance of a digital and a screen-film mammography system in terms of image quality. The method takes into account the dynamic range of the image detector, the detection of high- and low-contrast structures, the visualisation of the images and the observer response. A test object, designed to represent a compressed breast, was constructed from various tissue-equivalent materials ranging from purely adipose to purely glandular composition. Different areas within the test object permitted the evaluation of low- and high-contrast detection, spatial resolution and image noise. All the images (digital and conventional) were captured using a CCD camera so as to include the visualisation process in the image quality assessment. A mathematical model observer (non-prewhitening matched filter), which calculates the detectability of high- and low-contrast structures from spatial resolution, noise and contrast, was used to compare the two technologies. Our results show that, for a given patient dose, the detection of high- and low-contrast structures is significantly better for the digital system than for the conventional screen-film system studied. The method of using a test object with a large tissue composition range, combined with a camera to compare conventional and digital imaging modalities, can be applied to other radiological imaging techniques. In particular, it could be used to optimise the process of radiographic reading of soft-copy images.
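
For reference, the non-prewhitening matched-filter detectability index is usually written as d'^2 = (Σ|S|^2)^2 / Σ(|S|^2·NPS), where S is the Fourier transform of the expected signal difference and NPS is the noise power spectrum. The sketch below evaluates this for a hypothetical low-contrast disc in white noise; it is not the test object or data of this study.

```python
import numpy as np

def npw_detectability(delta_signal, nps):
    # Non-prewhitening matched-filter SNR in the Fourier domain, using the same
    # unnormalised-DFT convention for both the signal spectrum and the NPS.
    s2 = np.abs(np.fft.fft2(delta_signal))**2
    return np.sqrt(s2.sum()**2 / (s2 * nps).sum())

# Hypothetical low-contrast disc (contrast 0.05, radius 8 px) in white noise, sigma = 5.
n, sigma = 64, 5.0
y, x = np.mgrid[:n, :n]
disc = 0.05 * (((x - n / 2)**2 + (y - n / 2)**2) < 8**2)
white_nps = np.full((n, n), sigma**2 * n**2)   # flat spectrum of white pixel noise
print(f"d' = {npw_detectability(disc, white_nps):.3f}")
print(f"white-noise check: {np.sqrt((disc**2).sum()) / sigma:.3f}")   # should match d'
```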

Relevance: 100.00%

Abstract:

A joint project between the Paul Scherrer Institut (PSI) and the Institute of Radiation Physics was initiated to characterise the PSI whole body counter in detail through measurements and Monte Carlo simulation. Accurate knowledge of the detector geometry is essential for reliable simulations of human body phantoms filled with known activity concentrations. Unfortunately, the technical drawings provided by the manufacturer are often not detailed enough, and sometimes the specifications do not agree with the actual set-up. Therefore, the exact detector geometry and the position of the detector crystal inside the housing were determined from radiographic images. X-rays were used to analyse the structure of the detector, and (60)Co radiography was employed to measure the core of the germanium crystal. Moreover, the precise axial alignment of the detector within its housing was determined through a series of radiographic images taken at different incident angles. The information thus obtained enables us to optimise the Monte Carlo geometry model and to perform much more accurate and reliable simulations.