72 results for Segmentation hépatique


Relevance: 20.00%

Abstract:

Tissue microarray (TMA) is a high-throughput analysis tool for identifying new diagnostic and prognostic markers in human cancers. However, standard automated methods for tumour detection on both routine histochemical and immunohistochemistry (IHC) images remain underdeveloped. This paper presents a robust automated tumour cell segmentation model that can be applied to both routine histochemical tissue slides and IHC slides, and that performs finer, pixel-based segmentation compared with the blob- or area-based segmentation of existing approaches. The technique plays an important role in automated IHC quantification for biomarker analysis, where excluding stroma areas is critical. Under pixel-based evaluation (rather than area-based or object-based evaluation), experimental results show that the proposed method achieves 80% and 78% accuracy on two types of pathological virtual slides, routine histochemical H&E and IHC images, respectively. The technique greatly reduces labor-intensive workloads for pathologists, substantially speeds up the TMA construction process, and opens the possibility of fully automated IHC quantification.
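As a point of reference for the pixel-based evaluation mentioned above (comparison against a ground-truth mask rather than against detected blobs or regions), a minimal sketch in Python/NumPy could look as follows; the array names and the use of binary tumour masks are illustrative assumptions, not details from the paper.

    import numpy as np

    def pixel_accuracy(pred_mask, gt_mask):
        """Fraction of pixels whose predicted label matches the ground truth."""
        pred = np.asarray(pred_mask, dtype=bool)
        gt = np.asarray(gt_mask, dtype=bool)
        return np.mean(pred == gt)

    # Toy 4x4 binary masks (1 = tumour pixel, 0 = background/stroma).
    gt = np.array([[0, 0, 1, 1],
                   [0, 1, 1, 1],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0]])
    pred = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [0, 1, 1, 0],
                     [0, 0, 1, 0]])
    print(pixel_accuracy(pred, gt))  # 0.875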

Relevance: 20.00%

Abstract:

In this paper, we present a statistical shape model for human figure segmentation in gait sequences. Point Distribution Models (PDM) generally use Principal Component Analysis (PCA) to describe the main directions of variation in the training set. However, PCA imposes a number of assumptions on the data that do not always hold. In this work, we explore the potential of Independent Component Analysis (ICA) as an alternative shape decomposition for PDM-based human figure segmentation. The resulting shape model enables accurate estimation of human figures despite segmentation errors in the input silhouettes and exhibits good convergence properties.
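To illustrate the kind of shape decomposition being compared, the hedged sketch below fits both PCA and ICA to a set of aligned landmark shape vectors with scikit-learn; the synthetic training shapes and the number of retained components are assumptions made purely for illustration.

    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    # Each row is one training shape: K 2-D landmarks flattened to a 2K-vector,
    # already aligned (e.g. via Procrustes analysis).
    rng = np.random.default_rng(0)
    n_shapes, n_landmarks = 200, 30
    mean_shape = rng.normal(size=2 * n_landmarks)
    shapes = mean_shape + 0.1 * rng.normal(size=(n_shapes, 2 * n_landmarks))

    # PCA: orthogonal modes ordered by explained variance (the classic PDM).
    pca = PCA(n_components=8).fit(shapes)
    b_pca = pca.transform(shapes)            # shape parameters per training shape

    # ICA: statistically independent, not necessarily orthogonal, modes.
    ica = FastICA(n_components=8, random_state=0).fit(shapes)
    b_ica = ica.transform(shapes)

    # Reconstructing a shape from either parameterisation:
    recon_pca = pca.inverse_transform(b_pca[:1])
    recon_ica = ica.inverse_transform(b_ica[:1])
    print(recon_pca.shape, recon_ica.shape)  # (1, 60) (1, 60)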

Relevance: 20.00%

Abstract:

In this paper, we propose a multi-camera application capable of processing high-resolution images and extracting features based on color patterns on graphics processing units (GPUs). The goal is to work in real time under the uncontrolled environment of a sporting event such as a football match. Since football players exhibit diverse and complex color patterns, a Gaussian Mixture Model (GMM) is applied as the segmentation paradigm in order to analyze live sports images and video. Optimization techniques have also been applied to the C++ implementation using profiling tools focused on high performance. Time-consuming tasks were implemented on NVIDIA's CUDA platform, then restructured and enhanced, speeding up the whole process significantly. The resulting code is around 4-11 times faster on a low-cost GPU than a highly optimized C++ version on a central processing unit (CPU) over the same data. Real-time performance has been achieved, processing up to 64 frames per second. An important conclusion derived from our study is that the application scales with the number of cores on the GPU.
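As a generic illustration of GMM-based color segmentation (not the authors' CUDA implementation), the sketch below fits a Gaussian mixture to pixel colors and assigns each pixel to a mixture component using scikit-learn; the synthetic frame, color space, and number of components are assumptions.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic "frame": H x W x 3 RGB values in [0, 1].
    rng = np.random.default_rng(1)
    frame = rng.random((120, 160, 3))

    # Fit a GMM to the pixel colors and label every pixel with its component.
    pixels = frame.reshape(-1, 3)
    gmm = GaussianMixture(n_components=4, covariance_type='full', random_state=0)
    labels = gmm.fit_predict(pixels).reshape(frame.shape[:2])

    # 'labels' is a per-pixel segmentation map; in a player-segmentation pipeline
    # the components would be matched to kit, skin, and background color models.
    print(labels.shape, np.unique(labels))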

Relevance: 20.00%

Abstract:

In recent years, gradient vector flow (GVF) based algorithms have been successfully used to segment a variety of 2-D and 3-D imagery. However, due to the compromise between internal and external energy forces within the resulting partial differential equations, these methods may lead to biased segmentation results. In this paper, we propose MSGVF, a mean-shift-based GVF segmentation algorithm that can successfully locate the correct borders. MSGVF is developed so that, when the contour reaches equilibrium, the various forces arising from the different energy terms are balanced. In addition, the smoothness constraint on image pixels is preserved so that over- or under-segmentation can be reduced. Experimental results on publicly accessible datasets of dermoscopic and optic disc images demonstrate that the proposed method effectively detects the borders of the objects of interest.
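For context, the external force field that GVF-based contours follow can be computed with the standard Xu and Prince diffusion iteration, sketched below in NumPy; this is the baseline GVF formulation, not the mean-shift-based MSGVF variant proposed here, and the edge map, boundary handling, and parameter values are illustrative assumptions.

    import numpy as np

    def gvf_field(edge_map, mu=0.2, iters=200, dt=0.5):
        """Standard gradient vector flow (Xu & Prince) via explicit iteration."""
        f = edge_map.astype(float)
        fy, fx = np.gradient(f)          # np.gradient returns d/drow, d/dcol
        u, v = fx.copy(), fy.copy()
        mag2 = fx ** 2 + fy ** 2
        for _ in range(iters):
            # 5-point Laplacian with periodic (wrap-around) borders, for brevity.
            lap_u = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                     + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u)
            lap_v = (np.roll(v, 1, 0) + np.roll(v, -1, 0)
                     + np.roll(v, 1, 1) + np.roll(v, -1, 1) - 4 * v)
            u = u + dt * (mu * lap_u - mag2 * (u - fx))
            v = v + dt * (mu * lap_v - mag2 * (v - fy))
        return u, v

    # Toy edge map: a bright square whose gradients define the external force field.
    edge = np.zeros((64, 64))
    edge[20:44, 20:44] = 1.0
    u, v = gvf_field(edge)
    print(u.shape, v.shape)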

Relevance: 20.00%

Abstract:

It has been argued that the variation in brain activity that occurs when observing another person reflects a representation of actions that is indivisible, and which plays out in full once the intent of the actor can be discerned. We used transcranial magnetic stimulation to probe the excitability of corticospinal projections to two intrinsic hand muscles while motions to reach and grasp an object were observed. A symbolic cue either faithfully indicated the required final orientation of the object, and thus the nature of the grasp that was required, or was in conflict with the movement subsequently displayed. When the cue was veridical, modulation of excitability was in accordance with the functional role of the muscles in the action observed. If, however, the cue had indicated that the alternative grasp would be required, modulation of output to the first dorsal interosseous was consistent with the action specified rather than the action observed, until the terminal phase of the motion sequence during which the object was seen lifted. Modulation of corticospinal output during observation is thus segmented: it progresses initially in accordance with the action anticipated and, if discrepancies are revealed by visual input, coincides thereafter with the action seen.

Relevance: 20.00%

Abstract:

Life science research aims to continuously improve the quality and standard of human life. One of the major challenges in this area is maintaining food safety and security. A number of image processing techniques have been used to investigate the quality of food products. In this paper, we propose a new algorithm to effectively segment connected grains so that each of them can be inspected in a later processing stage. One family of existing segmentation methods is based on the idea of watersheding, and it has shown promising results in practice. However, due to the over-segmentation issue, this technique performs poorly in various situations, such as inhomogeneous backgrounds and connected targets. To address this problem, we present a combination of two classical techniques. In the first step, a mean shift filter is used to eliminate the inhomogeneous background, with entropy used as the convergence criterion. In the second step, a color gradient algorithm is used to detect the most significant edges, and a marker-controlled watershed transform is applied to segment the cluttered objects from the output of the previous stages. The proposed framework balances execution time, usability, efficiency, and segmentation quality when analyzing ring die pellets. The experimental results demonstrate that the proposed approach is effective and robust.
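The marker-controlled watershed step can be illustrated in isolation with scikit-image, as in the minimal sketch below; it uses a distance transform on a synthetic pair of touching grains rather than the mean shift filtering and color gradient stages of the proposed pipeline, so all inputs and parameters here are assumptions.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.draw import disk
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    # Synthetic binary image of two touching "grains".
    image = np.zeros((80, 80), dtype=bool)
    rr, cc = disk((40, 28), 16)
    image[rr, cc] = True
    rr, cc = disk((40, 52), 16)
    image[rr, cc] = True

    # Markers from peaks of the distance transform, then watershed on its negative.
    distance = ndi.distance_transform_edt(image)
    coords = peak_local_max(distance, footprint=np.ones((11, 11)), labels=image)
    markers = np.zeros_like(image, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=image)

    print(labels.max())  # number of separated grains (expected: 2)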

Relevance: 20.00%

Abstract:

We consider the problem of segmenting text documents that have a two-part structure, such as a problem part and a solution part. Documents of this genre include incident reports, which typically involve a description of events relating to a problem followed by a description of the solution that was tried. Segmenting such documents into their two component parts would render them usable in knowledge reuse frameworks such as Case-Based Reasoning. This segmentation problem presents a hard case for traditional text segmentation due to the lexical inter-relatedness of the segments. We develop a two-part segmentation technique that harnesses a corpus of similar documents to model the behavior of the two segments and their inter-relatedness, using language models and translation models respectively. In particular, we use separate language models for the problem and solution segment types, whereas the inter-relatedness between segment types is modeled using an IBM Model 1 translation model. We model documents as being generated starting from the problem part, which comprises words sampled from the problem language model, followed by the solution part, whose words are sampled either from the solution language model or from a translation model conditioned on the words already chosen in the problem part. We show, through an extensive set of experiments on real-world data, that our approach outperforms state-of-the-art text segmentation algorithms in segmentation accuracy, and that this improved accuracy translates into improved usability in Case-Based Reasoning systems. We also analyze the robustness of our technique to varying amounts and types of noise, and empirically show that it is quite noise tolerant and degrades gracefully with increasing amounts of noise.
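To make the generative story concrete, the following is a minimal sketch of scoring a candidate problem/solution boundary with two add-one-smoothed unigram language models plus a crude lexical translation term standing in for IBM Model 1; the toy vocabulary, counts, translation table, and mixture weight lam are assumptions and do not reflect the authors' estimation procedure.

    import math
    from collections import Counter

    def add_one_logprob(word, counts, total, vocab):
        """Add-one smoothed unigram log-probability of a single word."""
        return math.log((counts[word] + 1) / (total + len(vocab)))

    def score_split(doc, k, prob_counts, sol_counts, trans_prob, vocab, lam=0.5):
        """Log-likelihood of splitting doc into problem doc[:k] and solution doc[k:].

        Problem words come from the problem language model; each solution word is
        a mixture of the solution language model and a translation of some problem
        word (a crude stand-in for an IBM Model 1 term).
        """
        problem, solution = doc[:k], doc[k:]
        p_total, s_total = sum(prob_counts.values()), sum(sol_counts.values())
        score = sum(add_one_logprob(w, prob_counts, p_total, vocab) for w in problem)
        for w in solution:
            lm = math.exp(add_one_logprob(w, sol_counts, s_total, vocab))
            tm = sum(trans_prob.get((p, w), 1e-6) for p in problem) / max(len(problem), 1)
            score += math.log((1 - lam) * lm + lam * tm)
        return score

    # Toy corpus-derived statistics (assumed, for illustration only).
    vocab = {"printer", "jams", "paper", "replaced", "tray", "cleaned", "roller"}
    prob_counts = Counter({"printer": 5, "jams": 4, "paper": 3, "tray": 2})
    sol_counts = Counter({"replaced": 4, "cleaned": 3, "roller": 3, "tray": 2})
    trans_prob = {("jams", "cleaned"): 0.4, ("tray", "replaced"): 0.3,
                  ("paper", "roller"): 0.2}

    doc = ["printer", "jams", "paper", "tray", "cleaned", "roller", "replaced"]
    best_k = max(range(1, len(doc)), key=lambda k: score_split(
        doc, k, prob_counts, sol_counts, trans_prob, vocab))
    print(best_k, doc[:best_k], doc[best_k:])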