36 results for Segmentation hépatique

at Universidad Politécnica de Madrid


Relevance: 20.00%

Abstract:

Recently, vision-based advanced driver-assistance systems (ADAS) have received renewed interest as a means of enhancing driving safety. In particular, owing to their high performance-to-cost ratio, single-camera systems have become the main focus of this field. In this paper we present a novel on-board road modeling and vehicle detection system, developed as part of the European I-WAY project. The system relies on a robust estimation of the perspective of the scene, which adapts to the dynamics of the vehicle and generates a stabilized, rectified image of the road plane. This rectified plane is used by a recursive Bayesian classifier, which assigns each pixel to one of the classes corresponding to the elements of interest in the scene. This stage works as an intermediate layer that isolates subsequent modules, since it absorbs the inherent variability of the scene. The system has been tested on-road in different scenarios, including varied illumination and adverse weather conditions, and the results have proved remarkable even in such complex settings.
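
As a hedged illustration of the classification stage described above, the following Python sketch performs a per-pixel recursive Bayesian update on a rectified road image: the posterior from the previous frame acts as the prior for the current one. The class list and the Gaussian gray-level models are illustrative placeholders, not the paper's actual models.

```python
# Hedged sketch: recursive Bayesian per-pixel classification on a rectified road image.
import numpy as np

CLASS_PARAMS = {            # hypothetical gray-level models (mean, std) per class
    "pavement": (90.0, 25.0),
    "lane_marking": (200.0, 20.0),
    "vehicle": (40.0, 30.0),
}

def gaussian_likelihood(gray, mean, std):
    return np.exp(-0.5 * ((gray - mean) / std) ** 2) / (std * np.sqrt(2 * np.pi))

def recursive_update(prior, rectified_gray):
    """prior: dict class -> HxW posterior from the previous frame (acts as the prior)."""
    likelihoods = {c: gaussian_likelihood(rectified_gray, m, s)
                   for c, (m, s) in CLASS_PARAMS.items()}
    unnorm = {c: likelihoods[c] * prior[c] for c in CLASS_PARAMS}
    total = sum(unnorm.values()) + 1e-12
    return {c: unnorm[c] / total for c in CLASS_PARAMS}   # posterior becomes the next prior
```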

Relevance: 20.00%

Abstract:

Advanced liver surgery requires precise pre-operative planning, in which liver segmentation and remnant liver volume are key elements for avoiding post-operative liver failure. In this context, level-set algorithms have achieved better results than other approaches, especially with altered liver parenchyma or in cases with previous surgery. In order to improve functional liver parenchyma volume measurements, in this work we propose two strategies to enhance previous level-set algorithms: an optimal multi-resolution strategy with fine-detail correction and adaptive curvature, and an additional semi-automatic step imposing local curvature constraints. The results show more accurate segmentations, especially in elongated structures, detecting internal lesions and avoiding leakage into nearby structures.
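
The following Python sketch illustrates, under simplified assumptions, the kind of coarse-to-fine level-set evolution with an adaptive curvature weight described above; the speed term, curvature weights and 8-bit intensity normalization are placeholders rather than the paper's actual formulation.

```python
# Hedged sketch: multi-resolution level-set evolution with a scale-dependent curvature weight.
import numpy as np
from scipy import ndimage

def curvature(phi):
    """Mean curvature of the level-set function phi."""
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

def evolve(phi, speed, kappa_weight, steps=50, dt=0.5):
    """Evolve phi with a region speed plus a curvature (smoothing) term."""
    for _ in range(steps):
        grad = np.sqrt(sum(g ** 2 for g in np.gradient(phi)))
        phi = phi + dt * (speed + kappa_weight * curvature(phi)) * grad
    return phi

def multiresolution_levelset(image, phi0, levels=3):
    """Coarse-to-fine evolution; the curvature weight is relaxed at finer levels."""
    phi = phi0.astype(float)
    for level in reversed(range(levels)):                     # coarse -> fine
        img = ndimage.zoom(image.astype(float), 1.0 / 2 ** level, order=1)
        factors = [n / o for n, o in zip(img.shape, phi.shape)]
        phi = ndimage.zoom(phi, factors, order=1)             # bring phi to this scale
        inside = phi > 0
        mean_in = img[inside].mean() if inside.any() else img.mean()
        speed = 1.0 - np.abs(img - mean_in) / 255.0           # toy region term (8-bit data)
        phi = evolve(phi, speed, kappa_weight=0.2 * (level + 1))   # smoother when coarse
    return phi > 0
```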

Relevance: 20.00%

Abstract:

We present a procedure for constructing a probabilistic atlas based on affine moment descriptors. It uses a normalization procedure over the labeled atlas. The proposed linear registration is defined by closed-form expressions involving only geometric moments, and it applies both to atlas construction and to atlas-based segmentation. We model the likelihood term for each voxel and each label using parametric or non-parametric distributions, and the prior term is determined by applying the vote rule. The probabilistic atlas is built from the variability of our linear registration. We consider two segmentation strategies: (a) the proposed affine registration brings the target image into the coordinate frame of the atlas, or (b) the probabilistic atlas, previously aligned to the target image with our affine registration, is non-rigidly registered to it. Finally, we adopt a graph-cut Bayesian framework to implement the atlas-based segmentation.
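
A minimal sketch of the closed-form, moment-based affine registration idea referred to above is given below: each shape is normalized using its centroid and second-order central moments, and two shapes are registered by composing the normalizations. This is a generic moment-normalization scheme, not necessarily the authors' exact descriptors.

```python
# Hedged sketch: closed-form affine registration from geometric moments.
import numpy as np

def moment_normalization(mask):
    """Return (A, t) such that x_canonical = A @ (x - t) whitens the shape."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=0).astype(float)
    t = pts.mean(axis=1)
    centered = pts - t[:, None]
    cov = centered @ centered.T / pts.shape[1]        # second-order central moments
    evals, evecs = np.linalg.eigh(cov)
    A = np.diag(1.0 / np.sqrt(evals + 1e-12)) @ evecs.T
    return A, t

def affine_between(mask_src, mask_dst):
    """Closed-form affine mapping source coordinates into the target frame."""
    A_s, t_s = moment_normalization(mask_src)
    A_d, t_d = moment_normalization(mask_dst)
    M = np.linalg.inv(A_d) @ A_s                      # x_dst = M @ (x_src - t_s) + t_d
    return M, t_s, t_d
```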

Relevance: 20.00%

Abstract:

We propose a level-set-based variational approach that incorporates shape priors into edge-based and region-based models. The evolution of the active contour depends on both local and global information, and it has been implemented using an efficient narrow-band technique. For each boundary pixel, we calculate its dynamics according to its gray level, its neighborhood, and geometric properties established by the training shapes. We also propose a criterion for shape alignment based on an affine transformation obtained through an image normalization procedure. Finally, we illustrate the benefits of our approach on liver segmentation from CT images.
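
The sketch below illustrates one narrow-band level-set step combining an edge term with a shape-prior term, in the spirit of the approach described above; the weights, the band width and the pre-aligned prior signed-distance map `phi_prior` are assumptions for illustration.

```python
# Hedged sketch: one narrow-band update mixing an edge term and a shape-prior term.
import numpy as np
from scipy import ndimage

def narrow_band_step(phi, image, phi_prior, band=3.0, dt=0.5, w_edge=1.0, w_shape=0.5):
    """One update of phi restricted to a narrow band around the zero level set."""
    band_mask = np.abs(phi) < band                        # update only near the contour
    grad_img = np.hypot(*np.gradient(image.astype(float)))
    edge_speed = 1.0 / (1.0 + grad_img)                   # the contour slows on strong edges
    shape_speed = phi_prior - phi                         # pull phi towards the aligned prior
    grad_phi = np.hypot(*np.gradient(phi))
    update = w_edge * edge_speed * grad_phi + w_shape * shape_speed
    phi = phi.copy()
    phi[band_mask] += dt * update[band_mask]
    # re-initialise as a signed distance so the narrow band stays well defined
    inside = phi > 0
    return ndimage.distance_transform_edt(inside) - ndimage.distance_transform_edt(~inside)
```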

Relevance: 20.00%

Abstract:

Industrial applications of computer vision sometimes require the detection of atypical objects that occur as small groups of pixels in digital images. These objects are difficult to single out because they are small and randomly distributed. In this work we propose an image segmentation method using the novel Ant System-based Clustering Algorithm (ASCA). ASCA models the foraging behaviour of ants, which move through the data space searching for high-density regions and leave pheromone trails along their paths. The pheromone map is used to identify the exact number of clusters, and the pixels are assigned to these clusters using the pheromone gradient. We applied ASCA to the detection of microcalcifications in digital mammograms and compared its performance with state-of-the-art clustering algorithms such as the 1D Self-Organizing Map, k-Means, Fuzzy c-Means and Possibilistic Fuzzy c-Means. The main advantage of ASCA is that the number of clusters need not be known a priori. The experimental results show that ASCA is more efficient than the other algorithms in detecting small clusters of atypical data.
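
Since ASCA itself is not reproduced here, the following toy sketch only illustrates the general idea of pheromone-based clustering: simulated ants drift towards denser bins of a histogram and deposit pheromone, and the pheromone peaks indicate the number of clusters and their centres. All parameters are illustrative.

```python
# Hedged toy sketch of ant-inspired pheromone clustering on 1-D gray levels (NOT the authors' ASCA).
import numpy as np

def pheromone_clusters(values, n_ants=200, steps=100, bins=64, evaporation=0.05, seed=0):
    rng = np.random.default_rng(seed)
    hist, edges = np.histogram(values, bins=bins)
    density = hist / hist.max()
    pheromone = np.zeros(bins)
    ants = rng.integers(0, bins, size=n_ants)
    for _ in range(steps):
        moves = rng.integers(-1, 2, size=n_ants)               # steps of -1, 0 or +1
        candidates = np.clip(ants + moves, 0, bins - 1)
        accept = density[candidates] >= density[ants]          # move towards denser bins
        ants = np.where(accept, candidates, ants)
        np.add.at(pheromone, ants, density[ants])              # deposit pheromone
        pheromone *= (1.0 - evaporation)                       # evaporation
    peaks = [i for i in range(1, bins - 1)
             if pheromone[i] > pheromone[i - 1] and pheromone[i] >= pheromone[i + 1]
             and pheromone[i] > 0.1 * pheromone.max()]
    return [(edges[i] + edges[i + 1]) / 2 for i in peaks]      # cluster centres from the map
```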

Relevance: 20.00%

Abstract:

Computed tomography imaging is a non-invasive alternative for observing soil structures, mainly the pore space. The pore space corresponds to empty or free space, in the sense that no solid material is present there, only fluids; since fluid transport in soil depends on the pore space, it is important to identify the regions that correspond to pore zones. In this paper we present a methodology for detecting pore space and solid soil based on the synergy of image processing, pattern recognition and artificial intelligence. Mathematical morphology is used as an image-processing technique for image enhancement. In order to find groups of pixels with similar, more or less homogeneous gray-level intensity, a novel image sub-segmentation based on a Possibilistic Fuzzy c-Means (PFCM) clustering algorithm is used. Because artificial neural networks (ANNs) are efficient for demanding, large-scale and generic pattern-recognition applications, a classifier based on an artificial neural network is finally applied to classify soil images into two classes, pore space and solid soil.
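
As a hedged illustration of the clustering stage, the sketch below implements a plain fuzzy c-means on voxel gray levels; the paper uses the possibilistic variant (PFCM), which adds typicality terms not modelled here, and the resulting memberships would then feed the neural-network classifier.

```python
# Hedged sketch: standard fuzzy c-means on gray levels (the paper's PFCM is not reproduced).
import numpy as np

def fuzzy_cmeans(values, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float).ravel()
    centers = rng.choice(x, size=c, replace=False)
    for _ in range(iters):
        dist = np.abs(x[None, :] - centers[:, None]) + 1e-12   # c x N distances
        u = 1.0 / (dist ** (2.0 / (m - 1.0)))
        u /= u.sum(axis=0, keepdims=True)                      # fuzzy membership matrix
        new_centers = (u ** m @ x) / (u ** m).sum(axis=1)
        if np.max(np.abs(new_centers - centers)) < tol:
            centers = new_centers
            break
        centers = new_centers
    return centers, u          # memberships can then feed the ANN-based classifier
```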

Relevance: 20.00%

Abstract:

Applying biometrics to daily scenarios involves demanding requirements in terms of software and hardware. At the same time, current biometric techniques are being adapted to present-day devices, such as mobile phones and laptops, which are far from meeting those requirements. In fact, reconciling both needs is one of the most difficult problems in biometrics at present. This paper therefore presents a segmentation algorithm able to provide sufficiently precise results for hand biometric recognition over a wide range of backgrounds, including carpets, glass, grass, mud, pavement, plastic, tiles and wood. The results show that segmentation is carried out with high precision (F-measure of 88%) and with competitive running times when compared to state-of-the-art segmentation algorithms.
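
For reference, the F-measure quoted above is the harmonic mean of precision and recall computed between the predicted and ground-truth masks, as in this short sketch.

```python
# Hedged sketch: F-measure of a binary segmentation against a ground-truth mask.
import numpy as np

def f_measure(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(truth.sum(), 1)
    return 2 * precision * recall / max(precision + recall, 1e-12)
```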

Relevance: 20.00%

Abstract:

New trends in biometrics are oriented towards mobile devices in order to increase the overall security of daily actions such as bank account access, e-commerce or document protection on the phone itself. However, applying biometrics to mobile devices poses challenges in biometric data acquisition, feature extraction and private data storage. Specifically, this paper addresses the problem of hand segmentation given a picture of the hand against an unknown background, requiring an accurate result in terms of hand isolation. For the sake of user acceptability, no restrictions are imposed on the background, so hand images can be taken without any constraint, which makes segmentation a demanding task. Multiscale aggregation strategies are proposed to solve this problem, given their accuracy in unconstrained and complicated scenarios and their favorable time performance. The method is evaluated on a public synthetic database of 480,000 images covering different backgrounds and illumination environments. The results, in terms of accuracy and time performance, highlight the method's suitability for hand segmentation in contact-less environments, outperforming competitive methods in the literature such as Lossy Data Compression image segmentation (LDC).
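
A minimal sketch of a coarse-to-fine, two-class aggregation over a Gaussian pyramid is shown below; it only illustrates the multiscale idea and is not the authors' aggregation scheme (the pyramid depth, smoothing and two-means refinement are assumptions).

```python
# Hedged sketch: coarse-to-fine two-class segmentation over a Gaussian pyramid.
import numpy as np
from scipy import ndimage

def two_means(values, iters=20):
    """Simple two-class clustering of gray levels."""
    c = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        labels = np.abs(values[..., None] - c).argmin(axis=-1)
        for k in (0, 1):
            if (labels == k).any():
                c[k] = values[labels == k].mean()
    return labels

def multiscale_hand_segmentation(gray, levels=3, sigma=1.0):
    pyramid = [gray.astype(float)]
    for _ in range(levels - 1):
        pyramid.append(ndimage.zoom(ndimage.gaussian_filter(pyramid[-1], sigma), 0.5, order=1))
    labels = two_means(pyramid[-1])                        # segment the coarsest level
    for img in reversed(pyramid[:-1]):                     # propagate and refine downwards
        factors = [n / o for n, o in zip(img.shape, labels.shape)]
        labels = (ndimage.zoom(labels.astype(float), factors, order=0) > 0.5).astype(int)
        means = [img[labels == k].mean() if (labels == k).any() else img.mean() for k in (0, 1)]
        labels = np.abs(img[..., None] - np.array(means)).argmin(axis=-1)
    return labels
```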

Relevance: 20.00%

Abstract:

This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation, oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out using a publicly available synthetic database of 408,000 hand images on different backgrounds, comparing performance in terms of accuracy and computational cost with two competitive segmentation methods in the literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms these methods with regard to computational cost, time performance, accuracy and memory usage.
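
The kind of accuracy and running-time comparison reported above can be organised as in the sketch below, where the segmentation methods are interchangeable callables; the method names and metrics are placeholders rather than the paper's exact evaluation protocol.

```python
# Hedged sketch: comparing segmentation methods on accuracy and per-image running time.
import time
import numpy as np

def evaluate(methods, images, truths):
    """methods: dict name -> callable(image) -> binary mask."""
    report = {}
    for name, segment in methods.items():
        accs, times = [], []
        for img, gt in zip(images, truths):
            t0 = time.perf_counter()
            pred = segment(img)
            times.append(time.perf_counter() - t0)
            accs.append((pred.astype(bool) == gt.astype(bool)).mean())
        report[name] = {"accuracy": float(np.mean(accs)),
                        "seconds_per_image": float(np.mean(times))}
    return report
```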

Relevance: 20.00%

Abstract:

The synapses in the cerebral cortex can be classified into two main types, Gray's type I and type II, which correspond to asymmetric (mostly glutamatergic excitatory) and symmetric (inhibitory GABAergic) synapses, respectively. Hence, identifying the different types of synapses, quantifying them and determining the proportions in which they are found is extraordinarily important in terms of brain function. The ideal approach to calculating the number of synapses per unit volume is to analyze 3D samples reconstructed from serial sections. However, obtaining serial sections by transmission electron microscopy is an extremely time-consuming and technically demanding task. Using focused ion beam/scanning electron microscopy (FIB/SEM), we recently showed that virtually all synapses can be accurately identified as asymmetric or symmetric when they are visualized, reconstructed, and quantified from large 3D tissue samples obtained in an automated manner. Nevertheless, the analysis, segmentation, and quantification of synapses is still a labor-intensive procedure. Thus, novel solutions are needed to deal with the large volume of data being generated by automated 3D electron microscopy. Accordingly, we have developed ESPINA, a software tool that performs automated segmentation and counting of synapses in a reconstructed 3D volume of the cerebral cortex, greatly facilitating and accelerating these processes.
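
A simplified stand-in for the automated counting step is shown below: once a binary synapse mask has been produced by segmentation, synapses can be counted as 3D connected components. The connectivity and minimum-size filter are assumptions, not ESPINA's actual implementation.

```python
# Hedged sketch: counting segmented synapses as 3-D connected components.
import numpy as np
from scipy import ndimage

def count_synapses(synapse_mask, min_voxels=20):
    structure = np.ones((3, 3, 3))                           # 26-connected neighbourhood
    labels, n = ndimage.label(synapse_mask, structure=structure)
    sizes = ndimage.sum(synapse_mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_voxels) + 1           # drop tiny spurious components
    return len(keep), labels
```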

Relevance: 20.00%

Abstract:

One important issue emerging strongly in agriculture is the automation of tasks, in which optical sensors play an important role by providing images that must be conveniently processed. The most relevant image-processing procedures require the identification of green plants (in our experiments, barley and corn crops including weeds) so that actions such as site-specific treatments with chemical products or mechanical manipulations can be carried out. The identification of soil textures can also be useful for estimating variables such as humidity or smoothness. Finally, from the point of view of autonomous robot navigation, where the robot carries the imaging system, it is sometimes convenient to know not only the soil and the plants growing in it but also additional information supplied by global references based on specific areas. The images to be processed therefore contain three main types of texture to be identified: green plants, soil and, where present, sky. This paper proposes a new automatic approach for segmenting these main textures and for refining the identification of sub-textures within them. Concerning green identification, we propose a new approach that exploits the performance of existing strategies by combining them, weighting the information provided by each strategy according to the intensity variability; this is the first contribution. The combination of thresholding approaches for segmenting soil and sky is the second contribution, and the adaptation of a supervised fuzzy clustering approach for identifying sub-textures automatically is the third. The performance of the method verifies its viability for automatic image-based tasks in agriculture.
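
As a hedged sketch of the variability-weighted combination of greenness strategies described above, the code below combines two standard vegetation indices (ExG and ExGR), weights them by their variance and thresholds the result with Otsu's method; the indices and weighting are illustrative, not the paper's exact combination.

```python
# Hedged sketch: variance-weighted combination of greenness indices plus Otsu thresholding.
import numpy as np
from skimage.filters import threshold_otsu

def green_mask(rgb):
    rgb = rgb.astype(float) / 255.0
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                         # excess green index
    exgr = exg - (1.4 * r - g)                  # excess green minus excess red
    # weight each index by its variance (a simple stand-in for "intensity variability")
    w = np.array([exg.var(), exgr.var()])
    w = w / w.sum()
    combined = w[0] * exg + w[1] * exgr
    return combined > threshold_otsu(combined)
```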

Relevance: 20.00%

Abstract:

Determination of soil coverage by crop residues after ploughing is a fundamental element of Conservation Agriculture. This paper presents the application of genetic algorithms to the fine-tuning of the segmentation of digital images with the aim of automatically quantifying residue coverage. In other words, the objective is to achieve a segmentation that discriminates the texture of the residue, so that the output of the segmentation process is a binary image in which residue zones are isolated from the rest. The RGB images used come from a sample in which sections of terrain were photographed with a conventional camera positioned in zenith orientation atop a tripod; the images were taken outdoors under uncontrolled lighting conditions. Up to 92% similarity was achieved between the images obtained by the proposed segmentation process and templates produced by an elaborate manual tracing process. In addition to the proposed segmentation and fine-tuning procedures, a global quantification of soil coverage by residues for the sampled area was achieved that differed by only 0.85% from the quantification obtained using the template images. Moreover, the proposed method does not depend on the type of residue present in the image. The study was conducted at the experimental farm "El Encín" in Alcalá de Henares (Madrid, Spain).
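
The sketch below shows a small genetic algorithm tuning a single segmentation threshold against manually traced template masks, to illustrate the fine-tuning idea; the real procedure tunes more parameters and uses its own fitness definition.

```python
# Hedged sketch: a tiny genetic algorithm tuning one segmentation threshold against templates.
import numpy as np

def similarity(pred, template):
    return (pred.astype(bool) == template.astype(bool)).mean()

def fitness(threshold, images, templates):
    return np.mean([similarity(img > threshold, t) for img, t in zip(images, templates)])

def ga_tune_threshold(images, templates, pop_size=20, generations=30,
                      mutation_sigma=5.0, seed=0):
    rng = np.random.default_rng(seed)
    population = rng.uniform(0, 255, size=pop_size)
    for _ in range(generations):
        scores = np.array([fitness(th, images, templates) for th in population])
        parents = population[np.argsort(scores)[::-1][: pop_size // 2]]   # selection
        children = rng.choice(parents, size=pop_size - len(parents))
        children = children + rng.normal(0, mutation_sigma, size=children.shape)  # mutation
        population = np.clip(np.concatenate([parents, children]), 0, 255)
    best = population[np.argmax([fitness(th, images, templates) for th in population])]
    return float(best)
```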

Relevance: 20.00%

Abstract:

Non-invasive quantitative assessment of right ventricular anatomical and functional parameters is a challenging task. We present a semi-automatic approach for right ventricle (RV) segmentation from 4D MR images in two variants, which differ in the amount of user interaction. The method consists of three main phases: first, foreground and background markers are generated from the user input; next, an over-segmented region image is obtained by applying a watershed transform; finally, these regions are merged using 4D graph cuts with an intensity-based boundary term. For the first variant, the user outlines the inside of the RV wall in a few end-diastole slices; for the second, two marker pixels serve as the starting point for the application of a statistical atlas. Results were obtained by blind evaluation on 16 test 4D MR volumes. They show the method to be robust to marker location and place it favourably among existing approaches.
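
The marker and watershed stages can be sketched as below (the 4D graph-cut merging is omitted); the use of a Sobel gradient and of 2D slices here is an assumption for brevity.

```python
# Hedged sketch: unseeded watershed over-segmentation plus user markers as merge seeds.
from skimage.filters import sobel
from skimage.segmentation import watershed

def oversegment_and_seed(slice_2d, fg_coords, bg_coords):
    """fg_coords / bg_coords: lists of (row, col) marker pixels from the user."""
    gradient = sobel(slice_2d.astype(float))
    regions = watershed(gradient)                  # unseeded: many small regions
    fg_regions = {regions[r, c] for r, c in fg_coords}
    bg_regions = {regions[r, c] for r, c in bg_coords}
    return regions, fg_regions, bg_regions         # seeds for the later graph-cut merge
```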

Relevance: 20.00%

Abstract:

Many studies investigating the aging brain or disease-induced brain alterations rely on accurate and reproducible brain tissue segmentation. As a preliminary processing step prior to segmentation, reliable skull-stripping, i.e. the removal of non-brain tissue, is also crucial for all later image assessment. Typically, segmentation algorithms rely on an atlas, i.e. pre-segmented template data. Brain morphology, however, differs considerably depending on age, sex and race. In addition, diseased brains may deviate significantly from the atlas information, which is typically obtained from healthy volunteers. The imposed prior atlas information can thus degrade segmentation results. The recently introduced MP2RAGE sequence provides a bias-free T1 contrast with heavily reduced T2*- and PD-weighting compared to the standard MP-RAGE [1]. To this end, it acquires two image volumes at different inversion times within one acquisition and combines them into a uniform, i.e. homogeneous, image. In this work, we exploit the advantageous contrast properties of the MP2RAGE and combine it with a Dixon (i.e. fat-water separation) approach. The information gained from the additional fat image of the head considerably improves the skull-stripping outcome [2]. In conjunction with the pure T1 contrast of the MP2RAGE uniform image, we achieve robust skull-stripping and brain tissue segmentation without the use of an atlas.
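
For context, the MP2RAGE uniform (UNI) image is conventionally computed from the two inversion-time volumes as in the sketch below; this follows the commonly published combination (including an optional robustness term beta) and is not necessarily the exact pipeline used in this work.

```python
# Hedged sketch: conventional MP2RAGE "uniform" (UNI) combination of the two inversion images.
import numpy as np

def mp2rage_uniform(inv1, inv2, beta=0.0):
    """inv1, inv2: complex image volumes acquired at the two inversion times."""
    num = np.real(inv1 * np.conj(inv2)) - beta
    den = np.abs(inv1) ** 2 + np.abs(inv2) ** 2 + 2.0 * beta
    uni = num / np.maximum(den, 1e-12)
    return np.clip(uni, -0.5, 0.5)                 # UNI is conventionally bounded in [-0.5, 0.5]
```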

Relevance: 20.00%

Abstract:

In this project, MATLAB code was developed for processing 3D X-ray tomographic images of road asphalt samples from Poland. The 3D images were acquired by a research team at the Lodz University of Technology (LUT). The aim of the project is to create a tool that can be used to study 3D asphalt specimens and to support the analysis of the stress tests that the samples undergo in the laboratory, with the ultimate goal of finding solutions to the degradation suffered by Polish roads due to causes such as weather conditions. Road degradation has been investigated for many years, since it is driven by several factors: climate, poor maintenance and, in some cases, excessive traffic. In Poland these three factors cause the composition of many roads to degrade rapidly, above all because of the weather conditions throughout the year, with temperatures ranging from 30 °C in summer to -20 °C in winter. As a result the road surface suffers greatly and the asphalt lifts, which increases maintenance costs and road accidents. This project builds on the research carried out at LUT, seeking to improve the analysis of asphalt samples so that stress tests can be performed and solutions found to improve the asphalt on Polish roads, which would notably reduce maintenance costs.
Although the project does not go deeply into the technical aspects of asphalt and its composition, a thorough study of its characteristics was required in order to write code capable of producing the best results. For these reasons, algorithms that allow the study of 3D asphalt specimens were developed in MATLAB. This software was chosen because MATLAB is a powerful mathematical tool that operates efficiently on matrices, making it possible to develop code specifically for handling and processing 3D images. The algorithms perform processes such as segmentation of the 3D image, pre- and post-processing, filtering, and all kinds of microstructural analysis of the asphalt samples under study. The code for segmenting the 3D asphalt samples is comparatively simple in its design and development thanks to the image-processing tools included in MATLAB, which significantly ease the programming task, as does the chosen segmentation method. The code was designed with the objective of facilitating the analysis and study of the 3D images of asphalt samples, and was therefore written so that it could be integrated into a visual environment, making it easier and simpler to use. For this reason, all the algorithms and functions developed here are integrated into a visual tool built with the MATLAB GUIDE. This tool was created in collaboration with Jorge Vega and developed in his final-year project, entitled "Microstructural segmentation of 3D images of asphalt specimen using Matlab engine"; it uses all the functions programmed in this project and aims to provide an intuitive, easy-to-use graphical environment for the study of 3D asphalt samples.
The report is divided into four chapters. The first is the introduction, which presents the most important aspects of the project. The second chapter presents the technical background studied to develop the tool, covering the three most important topics of this project: asphalt materials, the principles of 3D X-ray tomography, and image processing. This provides the basis for the third chapter, which describes the methodology followed in writing the code, explaining the MATLAB working environment and all the image-processing functions used; it also presents the developed code together with a theoretical description of the methods used for pre-processing and segmenting the 3D images. Chapter 4 reports the results obtained in the study of one of the asphalt samples, and the final chapter presents the conclusions drawn from the development of this project. All the points established in the preliminary proposal for creating the tool were carried out, although new possibilities for this code have been left for future projects, for example the automatic detection of the different regions of an asphalt sample according to its composition. As this project shows, image-processing techniques are increasingly used in many areas, both industrial and medical, so work of this kind has many possibilities and can serve as the basis for new applications in the future. Finally, the project helped strengthen programming skills and broadened knowledge of MATLAB and of image-processing theory, and it provides the foundation for a larger project whose goal is a tool that can be used by the research team at the Lodz University of Technology and in future work.
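
As a hedged illustration (written in Python rather than the project's MATLAB), the sketch below shows the kind of basic 3D segmentation step described above: smoothing the tomographic volume, thresholding it to separate air voids from the asphalt material, labelling the voids and reporting the void fraction. The threshold choice is a placeholder.

```python
# Hedged sketch: thresholding a 3-D tomographic volume to separate air voids from asphalt.
import numpy as np
from scipy import ndimage

def segment_voids(volume, threshold=None, smooth_sigma=1.0):
    vol = ndimage.gaussian_filter(volume.astype(float), smooth_sigma)   # pre-processing
    if threshold is None:
        threshold = vol.mean()                    # placeholder; Otsu or a manual value in practice
    voids = vol < threshold                       # air voids appear dark in CT
    labels, n = ndimage.label(voids)              # individual void regions
    return labels, n, voids.mean()                # label image, void count, void fraction
```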