935 results for Optical signal and image processing device


Relevance:

100.00%

Publisher:

Abstract:

In this project, MATLAB code has been developed for the processing of 3D X-ray tomographic images of road asphalt specimens from Poland. These 3D images were acquired by a research team at the Lodz University of Technology (LUT). The aim of the project is to create a tool that can be used to study the different 3D asphalt specimens and to support the analysis of the stress tests that the samples undergo in the laboratory, with the final goal of finding solutions to the degradation suffered by Polish roads due to different causes, such as weather conditions. Road degradation has been investigated for many years, since it is driven by factors such as climate, lack of maintenance and, in some cases, excessive traffic. In Poland these three factors cause the composition of many roads to degrade rapidly, above all because of the weather conditions experienced throughout the year, with temperatures ranging from 30 °C in summer to -20 °C in winter. This puts great stress on the road composition and makes the asphalt lift, which increases maintenance costs and road accidents. This project builds on the research carried out at LUT and seeks to improve the analysis of the asphalt specimens, so that stress tests can be performed and solutions found to improve the asphalt of Polish roads, which would notably reduce maintenance costs. Although the project does not go deeply into the technical aspects of asphalt and its composition, an in-depth study of its characteristics was needed in order to create code capable of obtaining the best results. For these reasons, the algorithms that allow the study of the 3D asphalt specimens were developed in MATLAB. This software was chosen because MATLAB is a powerful mathematical tool that operates on matrices very quickly, making it possible to develop specific code for the treatment and processing of 3D images. Using it, these algorithms perform processes such as 3D image segmentation, image pre- and post-processing, filtering, and all kinds of microstructural analysis of the asphalt samples under study. The code presented for the segmentation of the 3D asphalt samples is less complex in its design and development thanks to the image processing tools included in MATLAB, which significantly ease the programming task, and to the segmentation method used. The code was designed with the aim of facilitating the analysis and study of the 3D images of the asphalt samples; its main purpose is to serve as a study tool, so it was developed to be integrated into a visual environment that makes it easier and simpler to use. For this reason, all the algorithms and functions developed here are integrated into a visual tool built with MATLAB GUIDE.
This tool was created in collaboration with Jorge Vega and was developed in his final degree project, entitled "Segmentación microestructural de imágenes en 3D de la muestra de asfalto utilizando Matlab" (Microstructural segmentation of 3D images of asphalt specimens using Matlab). It uses all the functions programmed in this project and aims to provide an intuitive, easy-to-use graphical environment for the study of the 3D asphalt samples. The project is divided into four chapters. The first is the introduction, which presents the most important aspects of the project. The second chapter presents the technical background studied to develop the tool, covering the three most important topics of this project: asphalt materials, the principles of X-ray 3D tomography, and image processing. This is the basis for the third chapter, which describes the methodology used to develop the code, explaining the MATLAB working environment and all the image processing functions used; it also presents all the code developed, together with a theoretical description of the methods used for the pre-processing and segmentation of the 3D images. Chapter 4 shows the results obtained in the study of one of the asphalt samples, and the final chapter draws the conclusions on the development of the project. All the points established in the preliminary project as a starting point for creating the tool have been completed, although new possibilities of this code, such as the automatic detection of the different regions of an asphalt sample according to its composition, have been left for future projects. As this project shows, image processing techniques are increasingly used in many areas, both industrial and medical; consequently, this type of project has many possibilities and can be the basis for many new applications in the future. Finally, the project has helped to strengthen programming skills, broadening knowledge of MATLAB and of image processing theory. Likewise, this work provides a basis for the development of a larger project whose outcome will be a tool that can be used by the research team of the Lodz University of Technology and in future projects.
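
The thesis code itself is written in MATLAB and is not reproduced in this record; as a rough, hedged illustration of the kind of 3D segmentation and microstructural analysis workflow described above, the following Python sketch thresholds a tomographic volume, cleans it with 3D morphology and reports simple void statistics. The file name and parameter values are placeholders, not taken from the project.

```python
# Illustrative sketch only -- the project's actual code is written in MATLAB/GUIDE.
# Assumes a 3D micro-CT volume stored as a TIFF stack (the path is a placeholder).
from scipy import ndimage
from skimage import io, filters, morphology, measure

volume = io.imread("asphalt_specimen.tif")           # shape: (slices, rows, cols)
volume = ndimage.median_filter(volume, size=3)       # pre-processing: denoise

# A global Otsu threshold separates air voids (dark) from the solid phase.
threshold = filters.threshold_otsu(volume)
voids = volume < threshold

# Post-processing: remove speckle and fill small holes in 3D.
voids = morphology.remove_small_objects(voids, min_size=64)
voids = morphology.remove_small_holes(voids, area_threshold=64)

# Microstructural analysis: porosity and per-void statistics.
porosity = voids.mean()
labels = measure.label(voids, connectivity=1)
regions = measure.regionprops(labels)
print(f"porosity: {porosity:.3%}, number of voids: {len(regions)}")
print("largest void volume (voxels):", max(r.area for r in regions))
```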

Relevance:

100.00%

Publisher:

Abstract:

We present an adaptive unequal error protection (UEP) strategy built on the 1-D interleaved parity Application Layer Forward Error Correction (AL-FEC) code for protecting the transmission of stereoscopic 3D video content encoded with Multiview Video Coding (MVC) through IP-based networks. Our scheme targets the minimization of quality degradation produced by packet losses during video transmission in time-sensitive application scenarios. To that end, based on a novel packet-level distortion model, it selects in real time the most suitable packets within each Group of Pictures (GOP) to be protected and the most convenient FEC technique parameters, i.e., the size of the FEC generator matrix. In order to make these decisions, it considers the relevance of the packet, the behavior of the channel, and the available bitrate for protection purposes. Simulation results validate both the distortion model introduced to estimate the importance of packets and the optimization of the FEC technique parameter values.
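
No code accompanies the abstract; the toy sketch below only illustrates the general idea of unequal error protection under a repair budget (ranking the packets of a GOP by expected distortion), not the authors' packet-level distortion model or FEC parameter optimization. All names and numbers are invented for illustration.

```python
# Toy unequal-error-protection (UEP) allocator -- illustrative only, not the
# optimization from the paper. Distortion values and budget are made up.
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int
    distortion_if_lost: float   # estimated quality impact (packet-level model)

def select_packets_to_protect(gop, loss_rate, repair_budget):
    """Greedily protect the packets whose expected distortion is largest."""
    # Expected distortion contribution = loss probability * impact if lost.
    ranked = sorted(gop, key=lambda p: loss_rate * p.distortion_if_lost,
                    reverse=True)
    return {p.seq for p in ranked[:repair_budget]}

gop = [Packet(seq=i, distortion_if_lost=d)
       for i, d in enumerate([9.1, 3.2, 1.1, 0.4, 6.5, 0.2, 2.8, 0.9])]
protected = select_packets_to_protect(gop, loss_rate=0.05, repair_budget=3)
print("packets protected by FEC:", sorted(protected))
```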

Relevance:

100.00%

Publisher:

Abstract:

The ever-evolving sophistication of magnetic resonance imaging techniques continues to provide new tools to characterize and quantify, in vivo, brain morphological changes related to neurodevelopment, senescence, learning or disease. Most morphometric methods extract shape or size descriptors, such as volume, surface area or cortical thickness, from the MRI image. These morphological measurements are commonly entered into statistical analyses to test between-group differences or correlations between the morphological measurement and other variables such as age, sex or disease severity. A wide variety of morphological biomarkers are reported in the literature. Despite this wide range of potentially useful biomarkers and available morphometric methods, the hypotheses and findings of the great majority of morphological studies are biased because reports assess only one morphometric feature and usually use only one image processing method.
Throughout this dissertation, biomarkers and image processing strategies are combined to provide innovative and useful morphometric tools for examining brain changes during neurodevelopment. Specifically, a shape analysis technique allowing a fine-grained assessment of regional thalamic volume in early-onset psychosis patients and healthy comparison subjects is implemented. The results show that the disease-related reduction in global thalamic volume, as previously described by other authors, could be particularly driven by a deficit in the anterior-mediodorsal and pulvinar thalamic regions in patients relative to healthy subjects. Furthermore, in a longitudinal study of healthy adolescents, different cortical features are extracted and combined and their interdependency is assessed over time; this study attempts to extend current knowledge of normal brain development, specifically the largely unexplored relationship between changes of distinct cortical morphological measurements during adolescence, and demonstrates that the cortical flattening observed during adolescence is produced by a combination of an age-related increase in sulcal width and a decrease in sulcal depth. Finally, this methodology is applied in a cross-sectional design to investigate the mechanisms underlying the decrease in cortical thickness and gyrification observed in patients with psychosis onset during adolescence.
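
As a purely illustrative sketch of the multi-biomarker, between-group comparison advocated above (not the dissertation's neuroimaging pipeline), the snippet below tests several synthetic morphometric features between two groups and applies a false-discovery-rate correction.

```python
# Illustrative only: synthetic data, not the dissertation's neuroimaging pipeline.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
features = ["thalamic_volume", "cortical_thickness", "sulcal_depth", "sulcal_width"]

patients = {f: rng.normal(loc=0.95, scale=0.1, size=40) for f in features}
controls = {f: rng.normal(loc=1.00, scale=0.1, size=40) for f in features}

pvals = [stats.ttest_ind(patients[f], controls[f]).pvalue for f in features]
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

for f, p, r in zip(features, p_adj, reject):
    print(f"{f:20s} FDR-adjusted p = {p:.3f}  significant: {r}")
```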

Relevance:

100.00%

Publisher:

Abstract:

We present results for quadruple-junction inverted metamorphic (4J-IMM) devices under the concentrated direct spectrum and analyze the present limitations to performance. The devices integrate lattice-matched subcells with rear heterojunctions, as well as lattice-mismatched subcells with low threading dislocation density. To interconnect the subcells, thermally stable lattice-matched tunnel junctions are used, as well as a metamorphic GaAsSb/GaInAs tunnel junction between the lattice-mismatched subcells. A broadband antireflection coating is used, as well as a front metal grid designed for high concentration operation. The best device has a peak efficiency of (43.8 ± 2.2)% at 327-sun concentration, as measured with a spectrally adjustable flash simulator, and maintains an efficiency of (42.9 ± 2.1)% at 869 suns, which is the highest concentration measured. The Voc increases from 3.445 V at 1-sun to 4.10 V at 327-sun concentration, which indicates high material quality in all of the subcells. The subcell voltages are analyzed using optical modeling, and the present device limitations and pathways to improvement are discussed. Although further improvements are possible, the 4J-IMM structure is clearly capable of very high efficiency at concentration, despite the complications arising from utilizing lattice-mismatched subcells.
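
A hedged back-of-the-envelope check (not part of the paper's analysis): in the ideal-diode limit each junction gains roughly (kT/q)*ln(X) of open-circuit voltage at concentration X, so four junctions at 327 suns should gain about 0.6 V, broadly consistent with the reported increase from 3.445 V to 4.10 V.

```python
# Back-of-the-envelope only: ideal diode (n = 1), 25 degC cell temperature assumed.
import math

k_B = 1.380649e-23      # J/K
q   = 1.602176634e-19   # C
T   = 298.15            # K
thermal_voltage = k_B * T / q            # ~25.7 mV

concentration = 327
junctions = 4
delta_voc = junctions * thermal_voltage * math.log(concentration)

print(f"predicted Voc gain: {delta_voc:.3f} V")    # ~0.60 V
print(f"reported  Voc gain: {4.10 - 3.445:.3f} V")  # 0.655 V (from the paper)
```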

Relevance:

100.00%

Publisher:

Abstract:

Red blood cells (RBCs), previously fixed with glutaraldehyde, adhere to glass slides coated with fibrinogen. The RBC deposition process on the horizontal glass surface is investigated by analyzing the relative surface covered by the RBCs, as well as the variance of this surface coverage, as a function of the concentration of particles. This study is performed by optical microscopy and image analysis. A model, derived from the classical random sequential adsorption model, has been developed to account for the experimental results. This model highlights the strong influence of the hydrodynamic interactions during the deposition process.
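
For orientation, the minimal sketch below implements the classical random sequential adsorption picture that the authors' model extends; it deposits non-overlapping disks at random and reports the resulting surface coverage. It omits the hydrodynamic interactions that the paper identifies as important.

```python
# Minimal 2D random sequential adsorption (RSA) of hard disks -- illustrative
# only; the paper's model additionally accounts for hydrodynamic interactions.
import numpy as np

def rsa_coverage(box_size=50.0, radius=1.0, attempts=20000, seed=0):
    """Deposit non-overlapping disks at random positions; return surface coverage."""
    rng = np.random.default_rng(seed)
    centers = []
    min_dist2 = (2.0 * radius) ** 2
    for _ in range(attempts):
        candidate = rng.uniform(radius, box_size - radius, size=2)
        if centers:
            d2 = np.sum((np.asarray(centers) - candidate) ** 2, axis=1)
            if d2.min() < min_dist2:
                continue                      # overlap: adsorption attempt rejected
        centers.append(candidate)
    return len(centers) * np.pi * radius**2 / box_size**2

# With enough attempts the coverage approaches the RSA jamming limit (~0.547).
print(f"surface coverage: {rsa_coverage():.3f}")
```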

Relevance:

100.00%

Publisher:

Abstract:

Lectures for the course module "Advanced techniques for the human eye study: ocular aberrometry".

Relevance:

100.00%

Publisher:

Abstract:

Cox's theorem states that, under certain assumptions, any measure of belief is isomorphic to a probability measure. This theorem, although intended as a justification of the subjectivist interpretation of probability theory, is sometimes presented as an argument for more controversial theses. Of particular interest is the thesis that the only coherent means of representing uncertainty is via the probability calculus. In this paper I examine the logical assumptions of Cox's theorem and I show how these impinge on the philosophical conclusions thought to be supported by the theorem. I show that the more controversial thesis is not supported by Cox's theorem.
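
For orientation only, a common textbook paraphrase of the assumptions at issue is sketched below; the paper's own axiomatization and notation may differ in detail.

```latex
% Common textbook paraphrase of Cox's assumptions (not the paper's notation).
% b(A \mid B) denotes the degree of belief in proposition A given B.
\begin{align*}
  b(A \wedge B \mid C) &= F\bigl(b(A \mid B \wedge C),\, b(B \mid C)\bigr) && \text{(conjunction)} \\
  b(\lnot A \mid B)    &= S\bigl(b(A \mid B)\bigr)                          && \text{(negation)}
\end{align*}
% Together with regularity conditions on F and S (continuity/differentiability),
% the theorem concludes that b is a strictly monotone rescaling of a probability
% measure; the paper examines how much philosophical weight these assumptions bear.
```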

Relevance:

100.00%

Publisher:

Abstract:

Feature selection is important in the medical field for many reasons. However, selecting important variables is a difficult task in the presence of censoring, which is a unique feature of survival data analysis. This paper proposes an approach to deal with the censoring problem in endovascular aortic repair survival data through Bayesian networks, merged and embedded with a hybrid feature selection process that combines Cox's univariate analysis with machine learning approaches, such as ensembles of artificial neural networks, to select the most relevant predictive variables. The proposed algorithm was compared with common survival variable selection approaches, namely the least absolute shrinkage and selection operator (LASSO) and Akaike information criterion (AIC) methods. The results showed that it was capable of dealing with high censoring in the datasets. Moreover, ensemble classifiers increased the area under the ROC curves of the two datasets, collected separately from two centers located in the United Kingdom. Furthermore, ensembles constructed with center 1 data enhanced the concordance index of center 2 predictions compared to the model built with a single network. Although the final reduced model obtained with the neural networks and their ensembles is larger than those of the other methods, it outperformed them in both concordance index and sensitivity for center 2 prediction, indicating that the reduced model is more powerful for cross-center prediction.
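
The following is a loose, hedged sketch of the kind of hybrid pipeline described (a univariate Cox filter followed by an ensemble of small neural networks), written with lifelines and scikit-learn on synthetic data. It is not the authors' implementation and, for simplicity, it trains the networks as event classifiers rather than full survival models.

```python
# Hedged sketch of a hybrid feature-selection pipeline (univariate Cox filter +
# ensemble of neural networks). Synthetic data; not the paper's implementation.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n, p = 300, 10
X = pd.DataFrame(rng.normal(size=(n, p)), columns=[f"x{i}" for i in range(p)])
time = rng.exponential(scale=np.exp(-(X["x0"] + 0.5 * X["x1"])))
event = rng.random(n) < 0.7                       # roughly 30% censoring
df = X.assign(time=time, event=event.astype(int))

# Step 1: univariate Cox screening -- keep covariates with p < 0.1.
selected = []
for col in X.columns:
    cph = CoxPHFitter().fit(df[[col, "time", "event"]],
                            duration_col="time", event_col="event")
    if cph.summary.loc[col, "p"] < 0.1:
        selected.append(col)

# Step 2: ensemble of small neural nets trained on the selected covariates
# (here simply predicting the event indicator; risk = averaged probability).
ensemble = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                          random_state=s).fit(X[selected], df["event"])
            for s in range(5)]
risk = np.mean([m.predict_proba(X[selected])[:, 1] for m in ensemble], axis=0)

print("selected features:", selected)
print("concordance index:",
      round(concordance_index(df["time"], -risk, df["event"]), 3))
```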

Relevance:

100.00%

Publisher:

Abstract:

The increase in the world population, with a higher proportion of elderly people, leads to an increase in the number of individuals with vision loss, and cataracts are one of the leading causes of blindness worldwide. A cataract is an eye disease consisting of the partial or total opacity of the crystalline lens (the natural lens of the eye) or its capsule. It can be triggered by several factors such as trauma, age, diabetes mellitus and medications, among others. It is known that ophthalmological care in rural and poor areas of Brazil falls short of what is needed, and many patients with treatable diseases such as cataracts remain undiagnosed and therefore untreated. In this context, this project presents the development of OPTICA, a teleophthalmology system that uses smartphones for the detection of ophthalmic emergencies, providing diagnostic aid for cataract by means of expert systems and image processing techniques. The images are captured with a cellphone camera and, together with a questionnaire filled in with patient information, are transmitted securely via the Mobile SANA platform to an online server, where an intelligent system assists in the diagnosis of cataract and makes the case available to ophthalmologists, who analyze the information and send back the patient's report. Thus, OPTICA brings eye care to the poorest and least favored population, improving the screening of critically ill patients and increasing access to diagnosis and treatment.
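
OPTICA's diagnostic logic is not described in detail in this record; the toy snippet below only illustrates one conceivable image-based screening cue (brightness of the central pupil region, since an opaque lens tends to look whitish). The file name and threshold are placeholders, and this is not the OPTICA classifier.

```python
# Toy illustration of an image-based screening cue (NOT the OPTICA system's
# classifier). Path, ROI size and threshold are placeholders.
from skimage import io, color

def pupil_brightness_score(path, roi_fraction=0.2):
    """Mean gray level inside a central square ROI, scaled to [0, 1]."""
    image = io.imread(path)
    gray = color.rgb2gray(image[..., :3]) if image.ndim == 3 else image / 255.0
    h, w = gray.shape
    dh, dw = int(h * roi_fraction / 2), int(w * roi_fraction / 2)
    roi = gray[h // 2 - dh : h // 2 + dh, w // 2 - dw : w // 2 + dw]
    return float(roi.mean())

score = pupil_brightness_score("eye_photo.jpg")
print("possible opacity" if score > 0.55 else "no obvious opacity",
      f"(score={score:.2f})")
```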

Relevance:

100.00%

Publisher:

Abstract:

Background: Light microscopic analysis of diatom frustules is widely used both in basic and applied research, notably taxonomy, morphometrics, water quality monitoring and paleo-environmental studies. In these applications, usually large numbers of frustules need to be identified and / or measured. Although there is a need for automation in these applications, and image processing and analysis methods supporting these tasks have previously been developed, they did not become widespread in diatom analysis. While methodological reports for a wide variety of methods for image segmentation, diatom identification and feature extraction are available, no single implementation combining a subset of these into a readily applicable workflow accessible to diatomists exists. Results: The newly developed tool SHERPA offers a versatile image processing workflow focused on the identification and measurement of object outlines, handling all steps from image segmentation over object identification to feature extraction, and providing interactive functions for reviewing and revising results. Special attention was given to ease of use, applicability to a broad range of data and problems, and supporting high throughput analyses with minimal manual intervention. Conclusions: Tested with several diatom datasets from different sources and of various compositions, SHERPA proved its ability to successfully analyze large amounts of diatom micrographs depicting a broad range of species. SHERPA is unique in combining the following features: application of multiple segmentation methods and selection of the one giving the best result for each individual object; identification of shapes of interest based on outline matching against a template library; quality scoring and ranking of resulting outlines supporting quick quality checking; extraction of a wide range of outline shape descriptors widely used in diatom studies and elsewhere; minimizing the need for, but enabling manual quality control and corrections. Although primarily developed for analyzing images of diatom valves originating from automated microscopy, SHERPA can also be useful for other object detection, segmentation and outline-based identification problems.
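
As a rough analogue of the workflow described (not SHERPA itself), the sketch below tries several thresholding methods on a micrograph, keeps the outline with the best crude quality score, and extracts a few outline descriptors of the kind used in diatom studies. The file name and the choice of quality score are assumptions.

```python
# Rough analogue of trying several segmentation methods and keeping the best
# outline (illustrative only; SHERPA itself is a dedicated tool).
import numpy as np
from skimage import io, color, filters, measure

def shape_descriptors(region):
    """A few outline descriptors commonly used in diatom studies."""
    circularity = 4 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
    aspect_ratio = region.major_axis_length / (region.minor_axis_length + 1e-9)
    return {"area": region.area, "circularity": circularity,
            "aspect_ratio": aspect_ratio}

image = io.imread("diatom_micrograph.png")            # placeholder file name
if image.ndim == 3:
    image = color.rgb2gray(image[..., :3])

best = None
for threshold_fn in (filters.threshold_otsu, filters.threshold_li,
                     filters.threshold_yen):
    mask = image < threshold_fn(image)                # object darker than background
    regions = measure.regionprops(measure.label(mask))
    if not regions:
        continue
    region = max(regions, key=lambda r: r.area)       # keep the largest object
    score = region.solidity                           # crude segmentation-quality score
    if best is None or score > best[0]:
        best = (score, threshold_fn.__name__, shape_descriptors(region))

print("chosen method:", best[1])
print("descriptors:", best[2])
```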

Relevance:

100.00%

Publisher:

Abstract:

This paper examines the use of trajectory distance measures and clustering techniques to define normal and abnormal trajectories in the context of pedestrian tracking in public spaces. In order to detect abnormal trajectories, what is meant by a normal trajectory in a given scene is first defined; every trajectory that deviates from this normality is then classified as abnormal. By combining Dynamic Time Warping with a modified K-Means algorithm for arbitrary-length data series, we have developed an algorithm for trajectory clustering and abnormality detection. The final system achieves an overall accuracy of 83% and 75% when tested on two different standard datasets.
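
A minimal sketch of the approach's two ingredients, assuming toy 2-D trajectories: a plain Dynamic Time Warping distance and an abnormality test against a representative "normal" trajectory. The paper's modified K-Means clustering and its parameter choices are not reproduced here.

```python
# Minimal sketch of DTW-based trajectory abnormality detection (illustrative;
# the paper combines DTW with a modified K-Means for arbitrary-length series).
import numpy as np

def dtw(a, b):
    """Dynamic Time Warping distance between two 2-D point sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Toy "normal" trajectories: roughly straight left-to-right walks.
rng = np.random.default_rng(0)
normal = [np.column_stack((np.linspace(0, 10, k), rng.normal(0, 0.2, k)))
          for k in rng.integers(15, 30, size=10)]

# Medoid of the normal set stands in for a cluster centre.
dists = [[dtw(a, b) for b in normal] for a in normal]
medoid = normal[int(np.argmin([sum(row) for row in dists]))]
threshold = 3.0 * np.median([dtw(t, medoid) for t in normal])

# A trajectory that wanders off is flagged as abnormal.
candidate = np.column_stack((np.linspace(0, 10, 20), np.linspace(0, 5, 20)))
print("abnormal" if dtw(candidate, medoid) > threshold else "normal")
```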

Relevance:

100.00%

Publisher:

Abstract:

AIRES, Kelson R. T.; ARAÚJO, Hélder J.; MEDEIROS, Adelardo A. D. Plane Detection Using Affine Homography. In: CONGRESSO BRASILEIRO DE AUTOMÁTICA, 2008, Juiz de Fora, MG: Anais... do CBA 2008.