912 results for Image analysis method


Relevance: 90.00%

Abstract:

Introduction: Doublecortin (DCX) is a microtubule-associated protein expressed by migrating neural precursors. DCX is also expressed in approximately 4% of all cortical cells in the normal adult primate brain, and its expression is locally enhanced in response to an acute insult to the brain, which is thought to play a role in plasticity or neural repair. It is therefore of interest how DCX expression is modified by a more chronic insult, such as the neurodegeneration seen in Parkinson's disease (PD) and Alzheimer's disease (AD). The aim of this study is to examine the expression of DCX-positive cells in the cortex of patients with a neurodegenerative disease compared to control patients. Method: DCX-positive cells were quantified on 9 DCX-stained, 5 μm thick, formalin-fixed, paraffin-embedded brain sections: 3 Alzheimer's disease patients, 3 Parkinson's disease patients and 3 control patients. Several sections per patient were available for different stainings (Gallyas, TAU, DCX). Using a computerized image analysis system (Explora Nova, La Rochelle, France), cortical columns were selected in cortical areas where marked degeneration was subjectively observed on the Gallyas- and TAU-stained sections. The total number of cells was then counted on the TAU sections, where all nuclei were colored blue, and the DCX-positive cells were counted on the corresponding DCX sections. These values were standardized to a reference surface area, and the ratio of DCX cells over total cells was calculated. Results: DCX cell expression differs between Alzheimer's disease patients and control patients: the percentage of DCX cells in the cortex is around 12.54% ± 2.17% in Alzheimer's patients, whereas it is around 5.47% ± 0.83% in control patients. In contrast, there is no significant difference in the ratio of DCX cells over total cells between Parkinson's patients and control patients, both having around 5% DCX cells. Discussion: There is a marked increase in DCX expression in AD (12.5%) compared to PD and controls (5.5%). This increase may have two potential causes. 1. The increased ratio is due to DCX cells being more resistant to degeneration than the surrounding cells, which degenerate in AD and lead to the cortical atrophy observed in AD patients; the decrease in total cells without any change in the number of DCX cells would then raise the ratio in AD relative to controls. 2. The increased ratio is due to an actual increase in DCX cells, meaning that some neural repair compensates for the degenerative process, analogous to the repair observed after acute brain lesions. This second idea fits into the broader perspective of neuroinflammation: the progression of the disease would trigger neuroinflammation, followed by the process that follows the primary inflammatory response, namely neural repair. Our study may thus indicate that the increase in DCX cells is an attempt to repair the degenerated neurons, in the context of neuroinflammation triggered by the pathophysiological progression of the disease.
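
To make the quantification step concrete, here is a minimal sketch of the ratio arithmetic using invented per-column counts (not the study's data); note that the reference-area standardisation cancels in the ratio itself:

```python
import numpy as np

# Hypothetical per-column counts (illustrative only, not the study's data):
# total nuclei from the TAU-stained section and DCX-positive cells from the
# corresponding DCX-stained section of the same cortical column.
total_cells = np.array([412, 388, 455])
dcx_cells = np.array([51, 47, 58])
column_area_mm2 = np.array([0.52, 0.48, 0.55])
reference_area_mm2 = 0.50                   # assumed reference surface area

# Standardise counts to the reference surface area.
total_std = total_cells * reference_area_mm2 / column_area_mm2
dcx_std = dcx_cells * reference_area_mm2 / column_area_mm2

# The area factor cancels in the ratio, which is the reported endpoint.
ratio = 100.0 * dcx_std / total_std
print(f"DCX cells: {ratio.mean():.2f}% +/- {ratio.std(ddof=1):.2f}%")
```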

Relevance: 90.00%

Abstract:

Inference for Markov random field image segmentation models is usually performed using iterative methods that adapt the well-known expectation-maximization (EM) algorithm for independent mixture models. However, some of these adaptations are ad hoc and may turn out to be numerically unstable. In this paper, we review three EM-like variants for Markov random field segmentation and compare their convergence properties at both the theoretical and practical levels. We specifically advocate a numerical scheme involving asynchronous voxel updating, for which general convergence results can be established. Our experiments on brain tissue classification in magnetic resonance images provide evidence that this algorithm may achieve significantly faster convergence than its competitors while yielding at least as good segmentation results.
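
A minimal sketch of this family of algorithms, assuming a mean-field EM-like variant with a Potts prior on a toy 1-D two-class signal (an illustration of the general idea, not the paper's exact scheme); the inner loop updates sites asynchronously, in place, so each site immediately sees its neighbours' freshest posteriors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "volume" with two Gaussian tissue classes (stand-in for MR data).
truth = (np.arange(200) >= 100).astype(int)
y = truth * 2.0 + rng.normal(0.0, 0.7, truth.size)

K, beta = 2, 1.0                         # number of classes, Potts coupling
mu = np.array([0.5, 1.5])                # deliberately rough initial means
sigma = np.array([1.0, 1.0])
q = np.full((y.size, K), 1.0 / K)        # soft class posteriors

for _ in range(20):
    # E-like step: asynchronous (in-place, voxel-by-voxel) mean-field update.
    for i in range(y.size):
        neigh = np.zeros(K)
        if i > 0:
            neigh += q[i - 1]
        if i + 1 < y.size:
            neigh += q[i + 1]
        logp = -0.5 * ((y[i] - mu) / sigma) ** 2 - np.log(sigma) + beta * neigh
        logp -= logp.max()
        q[i] = np.exp(logp) / np.exp(logp).sum()
    # M-like step: re-estimate Gaussian parameters from the soft labels.
    w = q.sum(axis=0)
    mu = (q * y[:, None]).sum(axis=0) / w
    sigma = np.sqrt((q * (y[:, None] - mu) ** 2).sum(axis=0) / w)

seg = q.argmax(axis=1)
print("accuracy:", max((seg == truth).mean(), (seg != truth).mean()))
```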

Relevance: 90.00%

Abstract:

Image filtering is a widely used image enhancement approach in digital imaging system design. It is used in television and camera design to improve the quality of the output image and to avoid problems such as image blurring, which gains importance in the design of large displays and of digital cameras. This thesis proposes a new image filtering method based on visual characteristics of the human eye, such as the modulation transfer function (MTF). In contrast to traditional filtering methods based on human visual characteristics, this thesis takes the anisotropy of human vision into account. The proposed method is based on laboratory measurements of the MTF of the human eye and accounts for the degradation of the image that the latter introduces: the image is enhanced in a way that pre-compensates for the degradation by the eye's MTF, so as to give the perception of the original image quality. The thesis gives a basic understanding of the image filtering approach and of the concept of the MTF, and describes an algorithm that performs image enhancement based on the MTF of the human eye. Experiments have shown good results according to human evaluation. Suggestions for future improvements of the algorithm are also given.
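
A minimal sketch of such MTF pre-compensation, assuming a stand-in anisotropic Gaussian MTF in place of the measured eye MTF, and a regularized (Wiener-style) inverse so that near-zero MTF values do not amplify noise without bound:

```python
import numpy as np

def precompensate(img, mtf, eps=0.05):
    """Boost the frequencies the eye attenuates (regularized inverse filter)."""
    F = np.fft.fft2(img)
    inv = mtf / (mtf ** 2 + eps ** 2)     # Wiener-style regularized inverse
    out = np.fft.ifft2(F * inv).real
    return np.clip(out, 0.0, 1.0)

# Stand-in anisotropic Gaussian MTF (the thesis uses measured eye MTFs):
h, w = 256, 256
fy = np.fft.fftfreq(h)[:, None]
fx = np.fft.fftfreq(w)[None, :]
mtf = np.exp(-((fx / 0.20) ** 2 + (fy / 0.12) ** 2))  # weaker vertically

img = np.random.default_rng(1).random((h, w))
sharpened = precompensate(img, mtf)
print(f"contrast boost: {sharpened.std() / img.std():.2f}x")
```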

Relevance: 90.00%

Abstract:

An experiment was carried out to determine the root distribution of four grapevine rootstocks (Salt Creek, Dogridge, Couderc 1613, IAC 572) in a coarse-textured soil of a commercial growing area in Petrolina County, São Francisco Valley, Brazil. The rootstocks were grafted to the seedless table grape cv. Festival and irrigated by microsprinkler. Roots were quantified by the trench wall method aided by digital image analysis. Results indicated that roots reached 1 m depth, but few differences among rootstocks were found. All of them presented at least 90% of their roots within the top 0.6 m, with a greater root presence in the first 0.4 m. The upper 0.6 m can therefore be taken as the effective rooting depth for soil and water management.
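
The effective-rooting-depth criterion (the shallowest depth containing at least 90% of the roots) can be sketched as follows, with invented root counts per 0.2 m layer standing in for the trench-wall image data:

```python
import numpy as np

# Hypothetical root intersections per 0.2 m depth layer (illustrative only).
layer_top = np.arange(0.0, 1.0, 0.2)        # upper boundary of each layer (m)
roots = np.array([420, 310, 150, 60, 25])

cum_frac = np.cumsum(roots) / roots.sum()
# Effective rooting depth: bottom of the first layer reaching 90 % of roots.
eff = layer_top[np.argmax(cum_frac >= 0.9)] + 0.2
print(f"effective rooting depth ~ {eff:.1f} m")
```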

Relevance: 90.00%

Abstract:

The objective of industrial crystallization is to obtain a crystalline product with the desired crystal size distribution, mean crystal size, crystal shape, purity, and polymorphic and pseudopolymorphic form. Effective control of product quality requires an understanding of the thermodynamics of the crystallizing system and of the effects of operating parameters on the crystalline product properties. Therefore, obtaining reliable in-line information about crystal properties and supersaturation, which is the driving force of crystallization, would be very advantageous. Advanced techniques, such as Raman spectroscopy, attenuated total reflection Fourier transform infrared (ATR FTIR) spectroscopy, and in-line imaging, offer great potential for obtaining reliable information during crystallization, and thus a better understanding of the fundamental mechanisms involved (nucleation and crystal growth). In the present work, the relative stability of anhydrate and dihydrate carbamazepine in mixed solvents containing water and ethanol was investigated. The kinetics of the solvent-mediated phase transformation of the anhydrate to the hydrate in the mixed solvents was studied using an in-line Raman immersion probe. The effects of operating parameters, in terms of solvent composition, temperature and the use of certain additives, on the phase transformation kinetics were explored. Comparison of the off-line measured solute concentration with the solid-phase composition measured by in-line Raman spectroscopy allowed the identification of the fundamental processes during the phase transformation. The effects of thermodynamic and kinetic factors on the anhydrate/hydrate phase of carbamazepine crystals during cooling crystallization were also investigated, as was the effect of certain additives on the batch cooling crystallization of potassium dihydrogen phosphate (KDP). The growth rate of a given crystal face was determined from images taken with an in-line video microscope, and an in-line image processing method was developed to characterize the size and shape of the crystals. An ATR FTIR probe and a laser reflection particle size analyzer were used to study the effects of cooling modes and seeding parameters on the final crystal size distribution of an organic compound, C15. Based on the results obtained, operating conditions were proposed that give improved product properties in terms of increased mean crystal size and narrower size distribution.
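
The in-line image processing step can be sketched as below, assuming simple global thresholding, area-equivalent diameter for size, and bounding-box elongation as a crude shape descriptor (the thesis's actual method is not reproduced here):

```python
import numpy as np
from scipy import ndimage

def crystal_size_shape(frame, thresh=0.5):
    """Segment bright crystals; return equivalent diameters and elongations."""
    labels, n = ndimage.label(frame > thresh)
    areas = ndimage.sum(np.ones_like(frame), labels, range(1, n + 1))
    diam = 2.0 * np.sqrt(areas / np.pi)        # area-equivalent diameter (px)
    elong = []
    for sl in ndimage.find_objects(labels):    # bounding box per particle
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        elong.append(max(h, w) / min(h, w))
    return diam, np.array(elong)

# Synthetic frame with two rectangular "crystals".
frame = np.zeros((64, 64))
frame[10:20, 10:40] = 1.0
frame[40:55, 45:55] = 1.0
diam, elong = crystal_size_shape(frame)
print("diameters:", diam.round(1), "elongations:", elong.round(2))
```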

Relevance: 90.00%

Abstract:

In order to establish guidelines for irrigation water management of banana cv. Pacovan (AAB group, Prata sub-group) in Petrolina County, northeastern Brazil, root distribution and activity were measured on an irrigated plantation, in a medium-textured soil, with plants spaced in a 3 x 3 m grid. Root distribution was evaluated by the soil profile method aided by digital image analysis, while root activity was indirectly determined from changes in soil water content and the direction of soil water flux. Data were collected from planting in January 1999 to the third harvest in September 2001. The effective rooting depth increased from 0.4 m at 91 days after planting (dap) to 0.6 m at 370, 510, and 903 dap, while water absorption by roots occurred predominantly in the top 0.6 m.
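
The indirect root-activity estimate can be illustrated with a per-layer water balance, using invented volumetric water content readings between two dates (not the study's measurements):

```python
import numpy as np

# Hypothetical volumetric water content (m3/m3) at four depths on two dates
# between irrigations (illustrative values only).
layer_bottom = np.array([0.2, 0.4, 0.6, 0.8])     # m
theta_day1 = np.array([0.26, 0.24, 0.22, 0.21])
theta_day3 = np.array([0.19, 0.19, 0.20, 0.21])
thickness = 0.2                                    # layer thickness (m)

# Water depletion per layer (mm) as an index of root water uptake.
depletion_mm = (theta_day1 - theta_day3) * thickness * 1000
share = depletion_mm / depletion_mm.sum()
for d, s in zip(layer_bottom, share):
    print(f"layer to {d:.1f} m: {s:.0%} of extraction")
```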

Relevance: 90.00%

Abstract:

Introduction. Genetic epidemiology focuses on the study of the genetic causes that determine health and disease in populations. To achieve this goal, a common strategy is to explore differences in genetic variability between diseased and non-diseased individuals. Usual markers of genetic variability are single nucleotide polymorphisms (SNPs), which are changes in a single base of the genome. The usual statistical approach in a genetic epidemiology study is a marginal analysis, where each SNP is analyzed separately for association with the phenotype. Motivation. It has been observed that, for common diseases, single-SNP analysis is not very powerful for detecting causal genetic variants. In this work, we consider Gene Set Analysis (GSA) as an alternative to standard marginal association approaches. GSA aims to assess the overall association of a set of genetic variants with a phenotype and has the potential to detect subtle effects of variants in a gene or a pathway that might be missed when assessed individually. Objective. We present a new optimized implementation of a pair of gene set analysis methodologies for analyzing the individual evidence of SNPs in biological pathways. We perform a simulation study exploring the power of the proposed methodologies in a set of scenarios with different numbers of causal SNPs and different effect sizes. In addition, we compare the results with the usual single-SNP analysis method. Moreover, we show the advantage of using the proposed gene set approaches in the context of an Alzheimer's disease case-control study, where we explore the Reelin signaling pathway.
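
A minimal sketch of a gene set analysis in this spirit: per-SNP association p-values are combined with Fisher's statistic and calibrated by phenotype permutation (permutation sidesteps the independence assumption that Fisher's formula would otherwise require). All data are simulated here; this is not the paper's optimized implementation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated case-control data: 200 subjects, 30 SNPs in the set, of which
# the first 3 are weakly causal (all parameter values are illustrative).
n, m = 200, 30
geno = rng.binomial(2, 0.3, size=(n, m)).astype(float)
risk = geno[:, :3].sum(axis=1) * 0.4
pheno = (rng.random(n) < 1 / (1 + np.exp(1.0 - risk))).astype(float)

def set_stat(y):
    """Fisher combination of per-SNP trend-test p-values."""
    p = np.array([stats.pearsonr(geno[:, j], y)[1] for j in range(m)])
    return -2.0 * np.log(p).sum()

obs = set_stat(pheno)
null = np.array([set_stat(rng.permutation(pheno)) for _ in range(500)])
print("gene set p-value:", ((null >= obs).sum() + 1) / (null.size + 1))
```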

Relevance: 90.00%

Abstract:

The Cherenkov light flashes produced by extensive air showers are very short in time. A high-bandwidth, fast-digitizing readout can therefore minimize the influence of the background from the light of the night sky and improve the performance of Cherenkov telescopes. The time structure of the Cherenkov image can further be used in single-dish Cherenkov telescopes as an additional parameter to reduce the background from unwanted hadronic showers. We present an analysis method that makes use of the time information, and the resulting improvement in the performance of the MAGIC telescope (especially after the upgrade with an ultra-fast 2 GSamples/s digitization system in February 2007). The use of timing information in the analysis of the new MAGIC data reduces the background by a factor of two, which in turn enhances the flux sensitivity to point-like sources by about a factor of 1.4 (as expected, since the background-limited sensitivity scales roughly as the inverse square root of the background), as tested on observations of the Crab Nebula.
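
The effect of a timing cut can be illustrated with a toy model in which gamma-like showers have a smaller spread of pixel arrival times than hadronic ones (invented numbers, not MAGIC's calibrated distributions); in the background-limited regime the sensitivity gain scales as the signal efficiency over the square root of the background efficiency:

```python
import numpy as np

rng = np.random.default_rng(3)

# Per-event RMS spread of pixel arrival times (ns), toy distributions.
gamma_rms = rng.normal(1.0, 0.3, 10_000)    # compact in time
hadron_rms = rng.normal(2.0, 0.8, 10_000)   # broader in time

cut = 1.5                                   # keep events below this spread
eff_g = (gamma_rms < cut).mean()
eff_h = (hadron_rms < cut).mean()
# Background-limited significance scales as S / sqrt(B).
print(f"signal kept {eff_g:.0%}, background kept {eff_h:.0%}, "
      f"sensitivity gain ~ {eff_g / np.sqrt(eff_h):.2f}x")
```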

Relevance: 90.00%

Abstract:

Chromogenic immunohistochemistry (IHC) is omnipresent in cancer diagnosis, but it has also been criticized for its limited ability to quantify the level of protein expression on tissue sections, thus potentially masking clinically relevant data. Shifting from qualitative to quantitative, immunofluorescence (IF) has recently gained attention, yet the question of how precisely IF can quantify antigen expression remains unanswered, particularly regarding its technical limitations and applicability to multiple markers. Here we introduce microfluidic precision IF, which accurately quantifies the target expression level on a continuous scale, based on microfluidic IF staining of standard tissue sections and low-complexity automated image analysis. We show that the level of HER2 protein expression, continuously quantified using microfluidic precision IF in 25 breast cancer cases, including several cases with equivocal IHC results, can predict the number of HER2 gene copies as assessed by fluorescence in situ hybridization (FISH). Finally, we demonstrate that the working principle of this technology is not restricted to HER2 but can be extended to other biomarkers. We anticipate that our method has the potential to provide automated, fast and high-quality quantitative in situ biomarker data using low-cost immunofluorescence assays, as increasingly required in the era of individually tailored cancer therapy.
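
The quantitative readout can be illustrated by regressing FISH copy number on a continuous IF intensity, with invented per-case values standing in for the 25 cases:

```python
import numpy as np
from scipy import stats

# Hypothetical per-case data: mean membrane IF intensity (arbitrary units)
# and FISH-assessed HER2 copy number (illustrative values only).
if_intensity = np.array([1.1, 1.4, 1.9, 2.3, 3.0, 4.2, 5.1, 6.0])
fish_copies = np.array([2.0, 2.2, 2.9, 3.5, 4.1, 6.0, 7.2, 8.5])

r, p = stats.pearsonr(if_intensity, fish_copies)
slope, intercept = np.polyfit(if_intensity, fish_copies, 1)
print(f"r = {r:.2f} (p = {p:.3g}); "
      f"copies ~ {slope:.2f} * intensity + {intercept:.2f}")
```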

Relevance: 90.00%

Abstract:

An unsupervised approach to image segmentation that fuses region and boundary information is presented. The proposed approach takes advantage of the combined use of three different strategies: guided seed placement, control of the decision criterion, and boundary refinement. The new algorithm uses the boundary information to initialize a set of active regions which compete for the pixels in order to segment the whole image. The method is implemented on a multiresolution representation, which ensures noise robustness as well as computational efficiency. The accuracy of the segmentation results has been demonstrated through an objective comparative evaluation of the method.
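
A simplified sketch of seeded region competition: regions grow in order of intensity similarity to their seeds, and the first claim on a pixel wins. The paper's boundary-guided seeding and multiresolution machinery are omitted:

```python
import heapq
import numpy as np

def competing_regions(img, seeds):
    """Seeded region competition on a 2-D image (simplified sketch)."""
    h, w = img.shape
    label = np.zeros((h, w), dtype=int)                # 0 = unassigned
    ref = {k: float(img[y, x]) for k, (y, x) in enumerate(seeds, 1)}
    pq = [(0.0, y, x, k) for k, (y, x) in enumerate(seeds, 1)]
    heapq.heapify(pq)
    while pq:
        _, y, x, k = heapq.heappop(pq)
        if label[y, x]:
            continue                                   # lost the competition
        label[y, x] = k                                # region k claims pixel
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not label[ny, nx]:
                heapq.heappush(pq, (abs(float(img[ny, nx]) - ref[k]), ny, nx, k))
    return label

# Two-region toy image: dark left half, bright right half.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
lab = competing_regions(img, [(16, 4), (16, 28)])
print((lab[:, :16] == 1).all() and (lab[:, 16:] == 2).all())
```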

Relevance: 90.00%

Abstract:

In this work we study the classification of forest types using mathematics-based image analysis of satellite data. We are interested in improving the classification of forest segments when information from two or more different satellites is combined. The experimental part is based on real satellite data originating from Canada. This thesis gives a summary of the mathematical basics of image analysis and of the supervised learning methods used in the classification algorithm. Three data sets and four feature sets were investigated. The considered feature sets were 1) histograms (quantiles), 2) variance, 3) skewness, and 4) kurtosis. Good overall performance was achieved when a combination of the ASTERBAND and RADARSAT2 data sets was used.
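
The feature extraction can be sketched as follows: the four feature sets are computed per segment on synthetic data and fed to a simple nearest-centroid rule (a stand-in classifier, not the thesis's method):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def segment_features(pixels):
    """Quantiles, variance, skewness and kurtosis of a segment's pixels."""
    q = np.quantile(pixels, [0.1, 0.25, 0.5, 0.75, 0.9])
    return np.concatenate(
        [q, [pixels.var(), stats.skew(pixels), stats.kurtosis(pixels)]])

# Toy segments drawn from two synthetic "forest type" distributions.
X = np.array([segment_features(rng.gamma(shape, 1.0, 400))
              for shape in [2.0] * 50 + [4.0] * 50])
y = np.array([0] * 50 + [1] * 50)

# Nearest-centroid classification keeps the sketch dependency-light.
c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
pred = np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)
print("training accuracy:", (pred.astype(int) == y).mean())
```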

Relevance: 90.00%

Abstract:

The ongoing development of digital media has brought a new set of challenges with it. As images containing more than three wavelength bands, often called spectral images, become a more integral part of everyday life, problems in the quality of the RGB reproduction from spectral images have turned into an important area of research. The notion of image quality is often thought to comprise two distinct areas, image quality itself and image fidelity, both dealing with similar questions: image quality is the degree of excellence of the image, while image fidelity measures how well the image under study matches the original. In this thesis, both image fidelity and image quality are considered, with an emphasis on the influence of color and spectral image features on both. Very few works are dedicated to the quality and fidelity of spectral images. Several novel image fidelity measures were developed in this study, including kernel similarity measures and a 3D structural similarity index (3D-SSIM). The kernel measures incorporate the polynomial, Gaussian radial basis function (RBF) and sigmoid kernels. 3D-SSIM is an extension of the traditional gray-scale SSIM measure, developed to incorporate spectral data. The novel image quality model presented in this study is based on the assumption that the statistical parameters of the spectra of an image influence its overall appearance. The spectral image quality model comprises three quality attributes: colorfulness, vividness and naturalness. Quality prediction is done by modeling the preference function expressed in just noticeable differences (JNDs). Both the image fidelity measures and the image quality model proved effective in the respective experiments.
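
The kernel similarity measures can be sketched as below for two spectra, using normalized polynomial, Gaussian RBF and sigmoid kernels (all parameter values are illustrative):

```python
import numpy as np

def poly_kernel(x, y, d=2, c=1.0):
    return (x @ y + c) ** d

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def sigmoid_kernel(x, y, a=0.01, c=0.0):
    return np.tanh(a * (x @ y) + c)

# Two hypothetical reflectance spectra sampled at 31 wavelengths (400-700 nm).
wl = np.linspace(400, 700, 31)
s1 = 0.5 + 0.3 * np.sin(wl / 60.0)
s2 = s1 + np.random.default_rng(5).normal(0.0, 0.02, wl.size)

for name, k in [("polynomial", poly_kernel), ("RBF", rbf_kernel),
                ("sigmoid", sigmoid_kernel)]:
    # Normalized similarity k(x, y) / sqrt(k(x, x) * k(y, y)).
    sim = k(s1, s2) / np.sqrt(k(s1, s1) * k(s2, s2))
    print(f"{name}: {sim:.4f}")
```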

Relevance: 90.00%

Abstract:

Multivariate Curve Resolution with Alternating Least Squares (MCR-ALS) is a resolution method that has been applied efficiently in many different fields, such as process analysis, environmental data analysis and, more recently, hyperspectral image analysis. When applied to second-order (three-way) data arrays, the underlying basis vectors in both measurement orders (i.e. the signal and concentration orders) can be recovered from the data matrix without ambiguities if the trilinear model constraint is applied during the ALS optimization. This work summarizes different protocols for applying MCR-ALS and presents a case study: near-infrared image spectroscopy.
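
A minimal bilinear MCR-ALS loop on simulated data, with non-negativity enforced by simple clipping; the trilinear constraint mentioned above is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated bilinear data D = C S + noise for two components.
t = np.linspace(0.0, 1.0, 50)[:, None]
C_true = np.hstack([np.exp(-3 * t), 1 - np.exp(-3 * t)])    # concentration
w = np.linspace(0.0, 1.0, 80)[None, :]
S_true = np.vstack([np.exp(-((w - 0.3) / 0.1) ** 2),
                    np.exp(-((w - 0.7) / 0.1) ** 2)])        # spectra
D = C_true @ S_true + rng.normal(0.0, 0.01, (50, 80))

# Initial spectral estimates: first and last rows of D ("purest" spectra).
S = D[[0, -1], :].copy()
for _ in range(50):
    C = np.clip(D @ np.linalg.pinv(S), 0.0, None)   # concentration order
    S = np.clip(np.linalg.pinv(C) @ D, 0.0, None)   # signal order
print("relative residual:", np.linalg.norm(D - C @ S) / np.linalg.norm(D))
```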

Relevance: 90.00%

Abstract:

In general, laboratory activities are costly in terms of time, space, and money. As such, the ability to provide realistically simulated laboratory data that enables students to practice data analysis techniques as a complementary activity would be expected to reduce these costs while opening up very interesting possibilities. In the present work, a novel methodology is presented for the design of analytical chemistry instrumental analysis exercises that can be automatically personalized for each student and evaluated immediately. The proposed system provides each student with a different set of experimental data, generated randomly while satisfying a set of constraints, rather than data obtained from actual laboratory work. This allows the instructor to provide students with a set of practical problems that complement their regular laboratory work, along with the corresponding feedback provided by the system's automatic evaluation process. To this end, the Goodle Grading Management System (GMS), an innovative web-based educational tool for automating the collection and assessment of practical exercises in engineering and scientific courses, was developed. The proposed methodology takes full advantage of the Goodle GMS fusion code architecture. The design of a particular exercise is provided ad hoc by the instructor and requires basic Matlab knowledge. The system has been employed with satisfactory results in several university courses. To demonstrate the automatic evaluation process, three exercises are presented in detail. The first involves linear regression analysis of data and calculation of the quality parameters of an instrumental analysis method. The second and third address two different comparison tests: a comparison test of means and a paired t-test.
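
The first exercise type can be sketched with a hypothetical generator in this spirit (illustrative Python, not Goodle GMS code, which is Matlab-based): calibration data are randomized within instructor-set constraints, and the regression quality parameters form the answer key for automatic evaluation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Per-student randomization within instructor-set constraints (all bounds
# and units here are hypothetical).
slope_true = rng.uniform(0.08, 0.12)       # a.u. per mg/L
intercept_true = rng.uniform(0.00, 0.02)
noise = rng.uniform(0.002, 0.005)
conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # mg/L standards
signal = slope_true * conc + intercept_true + rng.normal(0.0, noise, conc.size)

# Answer key: least-squares fit and common quality parameters.
fit = stats.linregress(conc, signal)
resid = signal - (fit.slope * conc + fit.intercept)
s_y = np.sqrt((resid ** 2).sum() / (conc.size - 2))     # std. error of fit
lod = 3.3 * s_y / fit.slope                             # ICH-style LOD
print(f"slope={fit.slope:.4f}, r={fit.rvalue:.4f}, LOD={lod:.2f} mg/L")
```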

Relevance: 90.00%

Abstract:

Dirt counting and dirt particle characterisation of pulp samples is an important part of quality control in pulp and paper production, and an automatic image analysis system for dirt particle characterisation across various pulp samples is therefore highly desirable. However, existing image analysis systems utilise a single threshold to segment the dirt particles in different pulp samples, which limits their precision; an automatic image analysis system that overcomes this deficiency would be very useful. In this study, a developed Niblack thresholding method is proposed, which defines the threshold based on the number of segmented particles; Kittler thresholding is also utilised. Both thresholding methods can determine the dirt count of different pulp samples accurately compared to visual inspection and the Digital Optical Measuring and Analysis System (DOMAS). In addition, the minimum resolution needed for acquiring a scanner image is defined. Among the dirt particle features considered, curl differs sufficiently to discriminate bark from fibre bundles in different pulp samples. Three classifiers, k-Nearest Neighbour, Linear Discriminant Analysis and Multi-layer Perceptron, are utilised to categorise the dirt particles. Linear Discriminant Analysis and Multi-layer Perceptron are the most accurate in classifying the dirt particles segmented by Kittler thresholding with morphological processing. The results show that dirt particles are successfully categorised as bark or fibre bundles.
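
The classic Niblack local threshold underlying the developed method can be sketched as follows on a synthetic pulp image; the study's refinement (selecting the threshold via the segmented particle count) is not implemented here, and isolated noise pixels are removed by a size filter:

```python
import numpy as np
from scipy import ndimage

def niblack_threshold(img, window=25, k=-1.5):
    """Niblack local threshold: T = local mean + k * local std.
    (k is simply fixed for this synthetic example.)"""
    img = img.astype(float)
    mean = ndimage.uniform_filter(img, window)
    sq_mean = ndimage.uniform_filter(img ** 2, window)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    return img < mean + k * std              # dark dirt on bright pulp

# Synthetic pulp sheet: bright noisy background with three dark specks.
rng = np.random.default_rng(8)
sheet = rng.normal(0.9, 0.02, (200, 200))
for y, x in [(50, 60), (120, 30), (160, 170)]:
    sheet[y:y + 4, x:x + 4] = 0.2

labels, n = ndimage.label(niblack_threshold(sheet))
areas = ndimage.sum(np.ones_like(sheet), labels, range(1, n + 1))
print("dirt count:", int((areas >= 8).sum()))  # drop isolated noise pixels
```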