959 results for Techniques: Image Processing
Abstract:
Morphometric investigations of the lung using a point- and intersection-counting strategy are often unable to reveal the full set of morphologic changes. This is particularly the case when structural modifications are not expressed as changes in volume density, and when coarse and fine surface-density alterations cancel each other out at different magnifications. Using digital image processing, we present a methodological approach that quantifies changes in the geometrical properties of the parenchymal lung structure easily and quickly, and that closely reflects the visual appreciation of those changes. Randomly sampled digital images from light-microscopic sections of lung parenchyma are filtered, binarized, and skeletonized. The lung septa are thus represented as a single-pixel-wide line network with nodal points and end points and the corresponding internodal and end segments. By automatically counting these points and measuring the lengths of the skeletal segments, the lung architecture can be characterized and very subtle structural changes can be detected. This new approach to lung structure analysis is highly sensitive to morphological changes in the parenchyma: it detected highly significant quantitative alterations in the structure of lungs of rats treated with a glucocorticoid hormone, where classical morphometry had partly failed.
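The point-counting step on the skeletonized image can be sketched in a few lines. The following Python example (an illustration, not the authors' software) builds a toy single-pixel-wide skeleton by hand and classifies end points and nodal points from 4-neighbour counts:

```python
import numpy as np

# Toy single-pixel-wide skeleton (1 = septal line pixel), standing in
# for a skeletonized lung-parenchyma image: a plus-shaped network with
# one nodal point and four end segments.
skeleton = np.array([
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
], dtype=int)

def classify_points(skel):
    """End point = skeleton pixel with exactly one 4-neighbour;
    nodal point = skeleton pixel with three or more 4-neighbours."""
    p = np.pad(skel, 1)
    ends = nodes = 0
    for y in range(skel.shape[0]):
        for x in range(skel.shape[1]):
            if skel[y, x]:
                n = p[y, x + 1] + p[y + 2, x + 1] + p[y + 1, x] + p[y + 1, x + 2]
                if n == 1:
                    ends += 1
                elif n >= 3:
                    nodes += 1
    return ends, nodes

ends, nodes = classify_points(skeleton)
print(ends, nodes)  # → 4 1
```

On a real section the skeleton would come from the filtering, binarization, and skeletonization steps described above; the counting rule is the same.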
Abstract:
The structure of the human immunodeficiency virus (HIV) and some of its components have been difficult to study in three dimensions (3D), primarily because of their intrinsic structural variability. Recent advances in cryo-electron tomography (cryo-ET) have provided a new approach for determining the 3D structures of the intact virus, the HIV capsid, and the envelope glycoproteins located on the viral surface. A number of cryo-ET procedures related to specimen preservation, data collection, and image processing are presented in this chapter. The techniques described herein are well suited for determining the ultrastructure of bacterial and viral pathogens and their associated molecular machines in situ at nanometer resolution.
Abstract:
Image denoising continues to be an active research topic. Although state-of-the-art denoising methods are numerically impressive and approach theoretical limits, they suffer from visible artifacts. While they produce acceptable results for natural images, human eyes are less forgiving when viewing synthetic images. At the same time, current methods are becoming more complex, making analysis and implementation difficult. We propose to treat image denoising as a simple physical process that progressively reduces noise by deterministic annealing. The results of our implementation are numerically and visually excellent. We further demonstrate that our method is particularly suited to synthetic images. Finally, we offer a new perspective on image denoising using robust estimators.
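The abstract does not give the update rule, but the general idea of noise reduction by deterministic annealing can be sketched as an iterative smoothing whose step size ("temperature") decays on a fixed schedule. A minimal NumPy sketch, with all parameters illustrative and not the paper's scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy synthetic image: a constant patch plus Gaussian noise.
clean = np.full((32, 32), 0.5)
noisy = clean + rng.normal(0.0, 0.2, clean.shape)

def anneal_denoise(img, steps=50, t0=1.0, decay=0.9):
    """Toy annealing-style denoiser: each step moves the image towards
    its local 4-neighbour mean with a step size ('temperature') that
    decays deterministically, so the smoothing progressively freezes out."""
    out = img.copy()
    t = t0
    for _ in range(steps):
        local_mean = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                             + np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out = out + t * 0.5 * (local_mean - out)  # deterministic update
        t *= decay                                # annealing schedule
    return out

den = anneal_denoise(noisy)
print(noisy.std(), den.std())  # the noise level drops after annealing
```

Because the update is a convex combination of the pixel and its neighbour mean, the image mean is preserved while the noise variance shrinks as the temperature cools.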
Abstract:
Purpose: Malposition of the acetabular component in total hip arthroplasty (THA) is a common surgical problem that can lead to hip dislocation and reduced range of motion, and may result in early loosening. The aim of this study is to validate the accuracy and reproducibility of a single-x-ray-image-based 2D/3D reconstruction technique for determining cup inclination and anteversion against two different computed tomography (CT)-based measurement techniques. Methods: Cup anteversion and inclination of 20 patients after cementless primary THA were measured on standard anteroposterior (AP) radiographs with the single-x-ray 2D/3D reconstruction program and compared with two different 3D CT-based analyses [Ground Truth (GT) and MeVis (MV) reconstruction model]. Results: The measurements from the single-x-ray 2D/3D reconstruction technique were strongly correlated with both types of CT image-processing protocols for both cup inclination [R²=0.69 (GT); R²=0.59 (MV)] and anteversion [R²=0.89 (GT); R²=0.80 (MV)]. Conclusions: The single-x-ray-image-based 2D/3D reconstruction technique is a feasible method for assessing cup position on postoperative x-rays. CT scans remain the gold standard for more complex biomechanical evaluation when a lower tolerance limit (±2 degrees) is required.
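The agreement statistic the study reports, the coefficient of determination R², can be reproduced for any pair of measurement series. A short NumPy example with hypothetical angle values (illustrative, not the study's data):

```python
import numpy as np

# Hypothetical cup-anteversion angles (degrees) measured by the 2D/3D
# x-ray technique and by a CT-based protocol; values are made up.
xray_2d3d = np.array([12.0, 18.5, 25.1, 9.7, 15.3, 21.8])
ct_based  = np.array([13.1, 17.9, 24.2, 11.0, 14.8, 20.9])

# R^2 = squared Pearson correlation, the statistic the study reports
# for agreement between the two measurement techniques.
r = np.corrcoef(xray_2d3d, ct_based)[0, 1]
r_squared = r ** 2
print(round(r_squared, 3))
```

High R² (as in the study's 0.80–0.89 for anteversion) indicates that the two techniques rank and scale the angles consistently, though it does not by itself bound the absolute error.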
Abstract:
PURPOSE: The purpose of this study was to identify morphologic factors affecting type I endoleak formation and bird-beak configuration after thoracic endovascular aortic repair (TEVAR). METHODS: Computed tomography (CT) data of 57 patients (40 males; median age, 66 years) undergoing TEVAR for thoracic aortic aneurysm (34 TAA, 19 TAAA) or penetrating aortic ulcer (n = 4) between 2001 and 2010 were retrospectively reviewed. The Gore TAG® stent-graft was used in 28 patients, followed by the Medtronic Valiant® in 16 cases, the Medtronic Talent® in 8, and the Cook Zenith® in 5. The proximal landing zone (PLZ) was in zone 1 in 13 patients, zone 2 in 13, zone 3 in 23, and zone 4 in 8. In 14 patients (25%), the procedure was urgent or emergent. In each case, pre- and postoperative CT angiography was analyzed using a dedicated image-processing workstation and complementary in-house software based on a 3D cylindrical intensity model to calculate aortic arch angulation and the conicity of the landing zones (LZ). RESULTS: The primary type Ia endoleak rate was 12% (7/57), with a subsequent re-intervention rate of 86% (6/7). Left subclavian artery (LSA) coverage (p = 0.036) and conicity of the PLZ (5.9 vs. 2.6 mm; p = 0.016) were significantly associated with an increased type Ia endoleak rate. A bird-beak configuration was observed in 16 patients (28%) and was associated with a smaller radius of aortic arch curvature (42 vs. 65 mm; p = 0.049). Type Ia endoleak was not associated with a bird-beak configuration (p = 0.388). The primary type Ib endoleak rate was 7% (4/57), with a subsequent re-intervention rate of 100%. Conicity of the distal LZ was associated with an increased type Ib endoleak rate (8.3 vs. 2.6 mm; p = 0.038). CONCLUSIONS: CT-based 3D aortic morphometry helps to identify risk factors for type I endoleak formation and bird-beak configuration during TEVAR: LSA coverage and conicity within the landing zones for type I endoleak formation, and steep aortic arch angulation for bird-beak configuration.
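Assuming conicity is taken as the proximal-to-distal diameter change across a landing zone (the abstract reports it in millimetres but does not spell out the formula), its computation from centreline diameters is straightforward:

```python
import numpy as np

# Hypothetical aortic diameters (mm) sampled along a proximal landing
# zone centreline, as a cylindrical-intensity fit might produce.
diameters = np.array([34.0, 33.2, 32.1, 30.5, 28.9])

# Conicity taken here as the diameter change across the landing zone
# (an assumption; the study's exact definition is not given).
conicity = diameters[0] - diameters[-1]
print(conicity)  # 5.1 mm: a tapered (conical) landing zone
```

In the study's terms, a value near 5.9 mm in the PLZ was associated with type Ia endoleak, versus about 2.6 mm in uncomplicated cases.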
Abstract:
The focal point of this paper is to propose and analyze a P0 discontinuous Galerkin (DG) formulation for image denoising. The scheme is based on a total variation approach that has been applied successfully in previous papers on image processing. The main idea of the new scheme is to model the restoration process as a discrete energy minimization problem and to derive a corresponding DG variational formulation. Furthermore, we prove that the method admits a unique solution and that a natural maximum principle holds. In addition, a number of examples illustrate the effectiveness of the method.
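The paper's P0 DG scheme is not reproduced here, but the underlying total-variation energy minimization can be sketched with a standard finite-difference gradient descent on a smoothed TV functional (a stand-in discretization with illustrative parameters, not the paper's method):

```python
import numpy as np

def tv_denoise(f, lam=5.0, eps=0.1, tau=0.02, steps=300):
    """Explicit gradient descent on the smoothed total-variation energy
    E(u) = sum sqrt(|grad u|^2 + eps^2) + (lam/2) * sum (u - f)^2,
    with periodic boundaries via np.roll."""
    u = f.copy()
    for _ in range(steps):
        ux = np.roll(u, -1, axis=1) - u          # forward differences
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + tau * (div - lam * (u - f))      # descent step
    return u

rng = np.random.default_rng(1)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0  # piecewise-constant image
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
u = tv_denoise(noisy)
print(noisy.std(), u.std())  # background noise is smoothed away
```

The TV term flattens noise in piecewise-constant regions while the fidelity weight lam keeps edges in place; the DG formulation of the paper discretizes the same energy with piecewise-constant elements.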
Abstract:
We present a non-conformal metric that generalizes the geodesic active contours approach to image segmentation. The new metric is obtained by adding to the Euclidean metric an additional term that penalizes the misalignment of the curve with the image gradient, and by multiplying the resulting metric by a conformal factor that depends on the edge intensity. In this way, a closer fit to the edge direction is obtained. The experimental results provided address the computation of the geodesics of the new metric by applying gradient descent to externally provided curves. The good performance of the proposed techniques is demonstrated by comparison with other active contour methods.
Abstract:
The reconstruction of the cell lineage tree of early zebrafish embryogenesis requires in-vivo microscopy imaging and image processing strategies. Second (SHG) and third (THG) harmonic generation microscopy observations in unstained zebrafish embryos allow cell divisions and cell membranes to be detected from the 1-cell to the 1K-cell stage. In this article, we present an ad-hoc image processing pipeline for cell tracking and cell-membrane segmentation that enables the reconstruction of the early zebrafish cell lineage tree up to the 1K-cell stage. This methodology has been used to obtain digital zebrafish embryos, generating a quantitative description of early zebrafish embryogenesis with minute temporal accuracy and micrometer spatial resolution.
Abstract:
The growth of multimedia services delivered over packet-based networks has raised the quality expectations of end-users. This has led to intensive research on techniques for evaluating the quality of experience (QoE) perceived by viewers of audiovisual content, considering the different degradations it can suffer along the broadcasting chain. In this paper, a comprehensive study of the impact of transmission errors affecting video and audio in IPTV is presented. To this end, subjective assessment tests were carried out using a novel methodology designed to stay as close as possible to home viewing conditions. 3DTV content in side-by-side format was also used in the experiments to compare the impact of the degradations. The results provide a better understanding of the effects of transmission errors and show that the QoE of this first approach to 3DTV is acceptable, although the visual discomfort it causes should be reduced.
Abstract:
We present a novel approach for detecting severe obstructive sleep apnea (OSA) cases by introducing non-linear analysis into sustained-speech characterization. The proposed scheme was designed to provide additional information to our baseline system, built on top of state-of-the-art cepstral-domain modeling techniques, with the aim of improving accuracy rates. This new information is only weakly correlated with our previous MFCC modeling of sustained speech and uncorrelated with the information in our continuous-speech modeling scheme. Tests were performed to evaluate the improvement for our detection task, based on sustained speech alone as well as combined with a continuous-speech classifier, resulting in a 10% relative reduction in classification error for the former and a 33% relative reduction for the fused scheme. These results encourage us to consider the existence of non-linear effects in OSA patients' voices, and to think about tools that could be used to improve short-time analysis.
Abstract:
A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase, with an evolutionary algorithm responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet-transform core aimed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow a hardware-friendly implementation. HW/SW partitioning issues are also considered after a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system in the final implementation device, whether a HW- or SW-based computing device. The architecture has been built in a modular way so that it can easily be extended to adapt other types of image-processing cores. Details on this pluggable-component point of view are also given in the paper.
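The hardware Evolution Strategy itself is not shown in the abstract, but the (1+1)-ES loop it builds on is compact. The following toy NumPy sketch evolves the two taps of a prediction filter to minimize residual energy on a sample signal, a software stand-in for the FPGA calibration phase (the real system evolves wavelet-transform coefficients):

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample signal for the toy calibration phase.
signal = np.sin(np.linspace(0, 4 * np.pi, 256))

def fitness(h):
    """Energy of the prediction residual (what a coder would transmit)."""
    pred = h[0] * signal[:-2] + h[1] * signal[1:-1]
    return float(np.mean((signal[2:] - pred) ** 2))

h = rng.normal(0.0, 0.5, 2)   # parent: two filter taps
sigma = 0.3                   # mutation step size
best = fitness(h)
for _ in range(500):
    child = h + rng.normal(0.0, sigma, 2)  # Gaussian mutation
    fc = fitness(child)
    if fc <= best:            # (1+1) selection: keep the better candidate
        h, best = child, fc
    sigma *= 0.995            # slowly shrink the search radius
print(best)  # residual energy after evolution
```

The elitist accept-if-not-worse rule and the simple Gaussian mutation are exactly the kind of operators that map well to hardware, which is the point the paper makes about its adapted genetic operators.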
Abstract:
Computed tomography (CT) imaging is a non-invasive alternative for observing soil structures, mainly the pore space. In soil data, the pore space corresponds to empty or free space, in the sense that no solid material is present there, only fluids; since fluid transport in soil depends on the pore space, it is important to identify the regions that correspond to pore zones. In this paper we present a methodology for detecting pore space and solid soil based on the synergy of image processing, pattern recognition, and artificial intelligence. Mathematical morphology is used as an image processing technique for image enhancement. To find groups of pixels with similar gray-level intensity (more or less homogeneous groups), a novel image sub-segmentation based on a Possibilistic Fuzzy c-Means (PFCM) clustering algorithm is used. Because Artificial Neural Networks (ANNs) are very efficient for demanding, large-scale, generic pattern recognition applications, a classifier based on an artificial neural network is finally applied to classify soil images into two classes, pore space and solid soil.
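The clustering step above uses the Possibilistic Fuzzy c-Means variant; the classical fuzzy c-means updates it extends look as follows on synthetic 1D gray levels (a sketch of standard FCM, not the paper's PFCM):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic gray levels: a dark "pore" mode and a bright "solid" mode,
# standing in for the histogram of a CT soil image.
gray = np.concatenate([rng.normal(50, 8, 300), rng.normal(180, 10, 300)])

def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=50):
    """Classical fuzzy c-means on scalar data: alternate the membership
    update u_ik ∝ d_ik^(-2/(m-1)) with the weighted center update."""
    centers = np.array([x.min(), x.max()])       # deterministic init
    for _ in range(iters):
        d = np.abs(x[:, None] - centers[None, :]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))              # unnormalized memberships
        u /= u.sum(axis=1, keepdims=True)
        um = u ** m
        centers = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
    return np.sort(centers), u

centers, u = fuzzy_cmeans_1d(gray)
print(centers)  # one center near the pore mode, one near the solid mode
```

The fuzzy memberships give each pixel a graded degree of belonging to pore and solid classes; the possibilistic variant additionally relaxes the row-sum-to-one constraint to cope with noise and outliers.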
Abstract:
This paper outlines an automatic computer vision system for the identification of Avena sterilis, a weed that grows in cereal crops. The final goal is to reduce the quantity of herbicide to be sprayed, an important and necessary step for precision agriculture: only areas where the presence of weeds is significant should be sprayed. The main problems in identifying this kind of weed are its spectral signature, similar to that of the crop, and its irregular distribution in the field. A new strategy involving two processes, image segmentation and decision making, has been designed. The image segmentation combines basic image processing techniques to extract cells from the image as the low-level units. Each cell is described by two area-based attributes measuring the relations between crops and weeds. The decision making is based on Support Vector Machines and determines whether a cell must be sprayed. The main findings of this paper lie in the combination of the segmentation and Support Vector Machine decision processes. Another important contribution of this approach is the minimal memory and computing-power requirements of the system compared with previous works. The performance of the method is illustrated by comparative analysis against some existing strategies.
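The two area-based cell attributes and the spray decision can be illustrated on a toy segmented image; here a simple threshold stands in for the Support Vector Machine the paper actually trains, and the layout is invented for illustration:

```python
import numpy as np

# Toy cell from a segmented field image: 1 = vegetation pixel, 0 = soil.
# Columns 3-4 are the expected crop row; vegetation outside that band
# is treated as weed cover.
cell = np.zeros((8, 8), dtype=int)
cell[:, 3:5] = 1      # crop row
cell[5:8, 0:2] = 1    # weed patch between rows

crop_band = np.zeros_like(cell, dtype=bool)
crop_band[:, 3:5] = True

# Two area-based attributes per cell, in the spirit of the paper:
# vegetation fraction inside vs outside the crop rows.
crop_cover = cell[crop_band].mean()
weed_cover = cell[~crop_band].mean()

# Stand-in threshold decision (the paper trains an SVM on such attributes).
spray = weed_cover > 0.1
print(crop_cover, round(float(weed_cover), 3), bool(spray))
```

An SVM trained on (crop_cover, weed_cover) pairs would replace the fixed threshold with a learned decision boundary, which is what makes the method robust to the weed's crop-like spectral signature.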
Abstract:
The main objective of this project is the development of an application in MATLAB. First, we carry out a theoretical study of the project's topic. Since the topic is Image and Television, we present the main background on digital image processing. Once the main techniques used in digital image processing are known, we make an exhaustive study of current image analysis techniques: a brief explanation of what image analysis consists of, the different steps performed on an image during analysis, a short description of each step, and some of the techniques available for carrying out each of them. After this first part, we focus on digital image correlation (DIC) techniques: how they emerged, their main concepts, their origins, and their advantages and disadvantages. Among the different image correlation techniques, we explain area-based matching in detail, since it is the technique used in this project: what it consists of, the steps that must be performed on the images, its terminology, and its possible shortcomings. Finally, once the theory has been studied, we build a simple application that evaluates and finds the differences in a sequence of images. The program used for this project is MATLAB, a mathematical environment widely used in engineering. The application produces two figures: one showing the motion vectors between the two images, and a second giving the correlation factor between them.
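The area-based matching step that produces the motion vectors can be sketched as an exhaustive template search between two frames (the project uses MATLAB; this Python sketch with invented data shows only the principle):

```python
import numpy as np

# Two toy frames: a bright block that shifts by (2, 1) pixels between
# them, illustrating area-based matching between an image pair.
a = np.zeros((16, 16)); a[4:8, 4:8] = 1.0
b = np.zeros((16, 16)); b[6:10, 5:9] = 1.0   # same block, shifted

def match_block(tpl, img):
    """Exhaustive area-based search: return the offset where the
    template best matches the image (minimum sum of squared differences)."""
    th, tw = tpl.shape
    best, best_off = np.inf, (0, 0)
    for dy in range(img.shape[0] - th + 1):
        for dx in range(img.shape[1] - tw + 1):
            ssd = ((img[dy:dy + th, dx:dx + tw] - tpl) ** 2).sum()
            if ssd < best:
                best, best_off = ssd, (dy, dx)
    return best_off

tpl = a[4:8, 4:8]            # window around the feature in frame 1
dy, dx = match_block(tpl, b)
motion = (dy - 4, dx - 4)    # displacement vector of the window
print(motion)                # → (2, 1)
```

Repeating the search for a grid of windows yields the field of motion vectors shown in the application's first figure; a normalized correlation score at the best offset gives the correlation factor shown in the second.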
Abstract:
Thermography is an inspection and diagnostic method based on the infrared radiation emitted by bodies. It allows this radiation to be measured at a distance and without contact, yielding a thermogram or thermal image, the object of study of this project. All bodies at a certain temperature emit infrared radiation. However, a thermographic inspection must take into account the emissivity of the body, its capability of emitting radiation, since this depends not only on the body's temperature but also on its surface characteristics. The tools needed to obtain a thermogram are mainly a thermal imaging camera and software for analyzing the images. The camera senses the infrared emission of an object and converts it into a visible image, originally monochrome; the image is then colored, by the camera itself or by software, for easier interpretation of the thermogram. Various techniques exist for obtaining these thermal images, differing in how heat energy is transferred to the body; they are classified into passive thermography, active thermography, and vibrothermography. The method used in each case depends on the thermal characteristics of the body, the type of defect to be located, and the spatial resolution of the images, among other factors. Accuracy is important when analyzing the images to obtain diagnoses and detect defects, so the images are processed to minimize effects caused by external factors, improve image quality, and extract information from the inspections performed. Thermography is a very flexible non-destructive testing method that offers many advantages, so its field of application is very wide, ranging from industrial applications to research and development. Surveillance and security, energy saving, medicine, and the environment are some of the fields where thermography provides significant benefits. This project is a theoretical study of thermography that describes each of these aspects in detail. It concludes with a practical application: building an infrared camera from a webcam and analyzing the images obtained with it, which demonstrates some of the theory presented as well as the possibility of recognizing objects by thermography.