954 results for Image processing techniques
Abstract:
Spray characterization under flash-boiling conditions was investigated using a symmetric multi-hole injector applicable to gasoline direct injection (GDI) engines. Tests were performed in a constant-volume combustion vessel using high-speed schlieren and Mie-scattering imaging systems. Four fuels were considered: n-heptane, 100% ethanol, ethanol blended with 15% iso-octane by volume, and test-grade E85. Experimental conditions covered a range of ambient pressures, fuel temperatures, and fuel injection pressures. Visualization of the vaporizing spray development was acquired with schlieren and laser-based Mie-scattering techniques. Time-evolved spray tip penetration, spray angle, and the ratio of the vapor to the liquid region were analyzed using digital image processing techniques in MATLAB. This research outlines spray characteristics at flash-boiling and non-flash-boiling conditions. At flash-boiling conditions, individual plumes were observed to merge, leading to a significant contraction in spray angle compared with non-flash-boiling conditions. The results indicate that at flash-boiling conditions, spray formation and expansion of the vapor region depend on the momentum exchange offered by the ambient gas. A relation between momentum exchange and the resulting liquid spray angle was also observed.
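The penetration measurement described above can be sketched in a few lines: threshold each frame and take the farthest row (from the injector) that still contains spray pixels. This is an illustrative Python sketch, not the MATLAB code used in the study; the threshold, pixel scale and toy frame are assumptions.

```python
def tip_penetration(image, threshold, mm_per_px):
    """Return spray tip penetration in mm.

    `image` is a 2D list of grayscale intensities with the injector
    at row 0; the tip is the farthest row containing spray pixels.
    """
    tip_row = 0
    for r, row in enumerate(image):
        if any(px >= threshold for px in row):  # spray present in this row
            tip_row = r
    return tip_row * mm_per_px

# Toy frame: spray reaches row 3 of a 5-row image.
frame = [
    [200, 220, 210],
    [180, 200, 190],
    [120, 160, 130],
    [ 90, 110,  95],
    [ 10,  15,  12],  # below threshold: no spray here
]
print(tip_penetration(frame, threshold=80, mm_per_px=0.5))  # -> 1.5
```

The spray angle and vapor/liquid-area ratio mentioned in the abstract would follow the same pattern: binarize, then measure geometry on the resulting mask.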
Abstract:
This paper outlines an automatic computer vision system for the identification of Avena sterilis, a weed that grows in cereal crops. The final goal is to reduce the quantity of herbicide to be sprayed, an important and necessary step toward precision agriculture: only areas where the presence of weeds is significant should be sprayed. The main difficulties in identifying this kind of weed are its spectral signature, which is similar to that of the crop, and its irregular distribution in the field. A new strategy has been designed involving two processes: image segmentation and decision making. The image segmentation combines basic, suitable image processing techniques in order to extract cells from the image as the low-level units. Each cell is described by two area-based attributes measuring the relations between the crops and weeds. The decision making is based on Support Vector Machines and determines whether a cell must be sprayed. The main findings of this paper lie in the combination of the segmentation and Support Vector Machine decision processes. Another important contribution of this approach is the system's minimal memory and computing-power requirements compared with previous works. The performance of the method is illustrated by comparative analysis against some existing strategies.
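The per-cell spray/no-spray decision can be illustrated with a minimal linear SVM trained by Pegasos-style sub-gradient descent. The two area-based attributes, labels and training cells below are hypothetical stand-ins, not the paper's actual features or kernel:

```python
import random

def train_linear_svm(data, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient training of a linear SVM.

    `data` is a list of ((x1, x2, 1.0), label) pairs: two area-based
    cell attributes plus a bias feature; labels are +1 (spray) / -1 (skip).
    """
    rng = random.Random(seed)
    w = [0.0, 0.0, 0.0]
    t = 0
    for _ in range(epochs):
        for x, y in rng.sample(data, len(data)):
            t += 1
            eta = 1.0 / (lam * t)          # decaying step size
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            w = [(1 - eta * lam) * wi for wi in w]  # regularization shrink
            if margin < 1:                 # hinge-loss sub-gradient step
                w = [wi + eta * y * xi for wi, xi in zip(w, x)]
    return w

def decide(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

# Toy cells: (weed-area ratio, crop-area ratio, bias)
cells = [((0.8, 0.1, 1.0), 1), ((0.7, 0.2, 1.0), 1),
         ((0.1, 0.9, 1.0), -1), ((0.2, 0.8, 1.0), -1)]
w = train_linear_svm(cells)
print(decide(w, (0.9, 0.1, 1.0)))  # a weedy cell is classified as "spray"
```

In practice a library SVM with a tuned kernel would replace this hand-rolled trainer; the sketch only shows the decision structure the paper describes.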
Abstract:
A method to reduce the template size of an iris-recognition system is reported. To achieve this result, the biological characteristics of the human iris were studied. Processing was performed with image processing techniques, isolating the iris and enhancing the area of study, after which multiresolution analysis was carried out. The resulting pattern was then reduced by means of a statistical study.
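The multiresolution step can be sketched with one level of a Haar-style decomposition, keeping only the approximation (LL) sub-band and thereby quartering the template size. A minimal sketch, assuming plain averaging per 2x2 block (the Haar low-pass up to normalization); the tile values are invented:

```python
def haar_ll(img):
    """One level of 2D Haar-style multiresolution analysis: return the
    LL (approximation) sub-band, quartering the template size.

    `img` is a 2D list with even dimensions; each output pixel is the
    average of a 2x2 block (Haar low-pass up to a scale factor).
    """
    h, w = len(img), len(img[0])
    return [[(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4.0
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

tile = [[1, 3, 5, 7],
        [1, 3, 5, 7],
        [2, 2, 6, 6],
        [2, 2, 6, 6]]
print(haar_ll(tile))  # -> [[2.0, 6.0], [2.0, 6.0]]
```

Repeating the decomposition, then discarding coefficients that a statistical study finds uninformative, is one plausible reading of the reduction described above.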
Abstract:
This paper proposes an automatic expert system for accurate crop row detection in maize fields based on images acquired from a vision system. Different applications in maize, particularly those based on site-specific treatments, require the identification of the crop rows. The vision system is designed with a defined geometry and installed on board a mobile agricultural vehicle, i.e. subject to vibrations, gyrations and uncontrolled movements. Crop rows can be estimated by applying geometrical parameters under image perspective projection. Because of the undesired effects above, the estimate is most often inaccurate compared with the real crop rows. The proposed expert system exploits human knowledge, which is mapped into two modules based on image processing techniques. The first is intended for separating green plants (crops and weeds) from the rest of the scene (soil, stones and others). The second is based on the system geometry: the expected crop lines are mapped onto the image and then a correction is applied through the well-tested and robust Theil–Sen estimator in order to adjust them to the real ones. Its performance compares favorably against the classical Pearson product–moment correlation coefficient.
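The Theil–Sen estimator used for the line correction is easy to state: the fitted slope is the median of all pairwise slopes, and the intercept is the median of the residual offsets, which makes the fit robust to outlying points. A minimal pure-Python sketch (the toy points are invented, not field data):

```python
from itertools import combinations
from statistics import median

def theil_sen(points):
    """Robust Theil–Sen line fit: slope = median of all pairwise slopes,
    intercept = median of (y - slope*x). Isolated outliers (e.g. weed
    patches misread as crop-row pixels) barely disturb the estimate."""
    slopes = [(y2 - y1) / (x2 - x1)
              for (x1, y1), (x2, y2) in combinations(points, 2)
              if x2 != x1]
    m = median(slopes)
    b = median(y - m * x for x, y in points)
    return m, b

# Points on y = 2x + 1 with one gross outlier.
pts = [(0, 1), (1, 3), (2, 5), (3, 7), (4, 40)]
m, b = theil_sen(pts)
print(m, b)  # -> 2 1: the outlier does not move the fitted line
```

A least-squares fit on the same points would be dragged toward the outlier, which is exactly why the paper prefers this estimator for noisy row candidates.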
Abstract:
The relationship between engineering and medicine is becoming ever closer, and this has given rise to a new discipline, bioengineering, the field in which this project is set. The field has gained great interest owing to the rapid development of new technologies that enable, facilitate and improve medical diagnosis compared with traditional methods. Within bioengineering, the fastest-growing area is medical imaging, through which images of the inside of the human body can be obtained non-invasively, without resorting to surgery. By means of modalities such as magnetic resonance imaging, X-rays, nuclear medicine and ultrasound, images of the human body can be acquired for diagnosis. For those images to be useful within the medical field, they must be properly processed with digital image processing techniques, and it is in this field of digital medical image processing that this project was developed. Thanks to digital image processing methods for extracting information, improving visualization and highlighting features of interest, specialists' diagnoses can be eased and improved. In an age where automation of processes is much sought after, automated image processing that eases information extraction is extremely useful. Currently, one of the most powerful tools for medical image processing is Matlab, thanks to its image processing toolbox; its power and versatility simplify the implementation of algorithms, which is why it was chosen for the practical part of this project. This project is structured in two parts. The first gives a general description of the different modalities for obtaining medical images and explains the uses of each method according to the field of application, followed by a description of the most important digital image processing techniques used in the project. In the second part, four Matlab applications are developed to exemplify medical image processing algorithms. These implementations demonstrate the application and usefulness of the concepts explained in the theoretical part, such as segmentation and spatial filtering, as well as other specific concepts. The example applications developed were: calculation of the metastasis percentage of a tissue, diagnosis of spinal deformities, estimation of the MTF of a gamma camera, and measurement of the area of a fibroadenoma in a breast ultrasound image. Finally, for each application, its usefulness in the field of medical imaging, the results obtained, and its implementation in a graphical user interface to ensure ease of use are detailed.
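The first example application, the metastasis percentage, can be sketched as a simple ratio of segmented pixels. A fixed global threshold stands in here for whatever segmentation the project actually used, and the sample values are invented:

```python
def metastasis_percentage(tissue, threshold):
    """Percentage of tissue pixels classified as metastatic.

    `tissue` is a 2D list of grayscale values; pixels >= threshold are
    counted as metastatic (a stand-in for the real segmentation step).
    """
    pixels = [px for row in tissue for px in row]
    hits = sum(1 for px in pixels if px >= threshold)
    return 100.0 * hits / len(pixels)

sample = [[10, 200, 30],
          [220, 15, 240],
          [25, 10, 230]]
print(metastasis_percentage(sample, threshold=128))  # 4 of 9 pixels -> 44.4...%
```

The other applications (spinal deformity, MTF, fibroadenoma area) follow the same pattern of segmenting a region and measuring it, with more elaborate preprocessing.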
Abstract:
Worldwide, breast cancer is the most frequent type of cancer and one of the leading causes of death among women. Currently, the most effective method for detecting breast lesions at an early stage is mammography, which contributes decisively to the early diagnosis of a disease that, if caught in time, has a very high probability of cure. One of the main and most frequent findings in a mammogram is microcalcifications, which are considered an important indicator of breast cancer. When analyzing mammograms, factors such as visualization capability, fatigue or the professional experience of the radiologist increase the risk of missing lesions that are present. To reduce this risk, it is important to have alternatives such as a second opinion from another specialist or a double reading by the same one; the first option raises the cost, and both prolong the diagnosis time. This is a strong motivation for the development of decision-support systems. This thesis proposes, develops and justifies a system capable of detecting microcalcifications in regions of interest extracted from digitized mammograms, to contribute to the early detection of breast cancer. The system is based on digital image processing, pattern recognition and artificial intelligence techniques. Its development rests on the following considerations: 1. To train and test the proposed system, a database of images is created, consisting of regions of interest extracted from digitized mammograms. 2. The application of the top-hat transform is proposed, a digital image processing technique based on mathematical morphology operations; its purpose is to improve the contrast between the microcalcifications and the surrounding tissue, which is typically poor in mammograms. 3. A novel algorithm called sub-segmentation is proposed, based on pattern recognition techniques applying an unsupervised clustering algorithm, the PFCM (Possibilistic Fuzzy c-Means). The aim is to find the regions corresponding to microcalcifications and distinguish them from healthy tissue. To show the advantages and disadvantages of the proposed algorithm, it is compared with two algorithms of the same type: k-means and FCM (Fuzzy c-Means). Notably, this work is the first to use sub-segmentation to detect regions belonging to microcalcifications in mammographic images. 4. Finally, a classifier based on an artificial neural network, specifically an MLP (Multi-layer Perceptron), is proposed. The purpose of the classifier is to discriminate, in a binary fashion, the patterns built from the gray-level intensities of the original image, distinguishing between microcalcification and healthy tissue.
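The top-hat transform of consideration 2 is the image minus its morphological opening, so only bright details narrower than the structuring element survive, which is exactly what makes microcalcifications stand out from smoother tissue. A 1-D sketch (the mammogram row and element size are invented):

```python
def erode(sig, k):
    """Grayscale erosion: minimum over a sliding window of width k."""
    r = k // 2
    return [min(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def dilate(sig, k):
    """Grayscale dilation: maximum over a sliding window of width k."""
    r = k // 2
    return [max(sig[max(0, i - r):i + r + 1]) for i in range(len(sig))]

def white_top_hat(sig, k):
    """White top-hat: signal minus its morphological opening.
    Bright features narrower than the structuring element remain;
    the smooth background is removed."""
    opened = dilate(erode(sig, k), k)
    return [s - o for s, o in zip(sig, opened)]

# Smooth tissue background with one narrow bright spot (a calcification).
row = [10, 10, 10, 10, 50, 10, 10, 10]
print(white_top_hat(row, k=3))  # -> [0, 0, 0, 0, 40, 0, 0, 0]
```

On a real mammogram the same operation runs in 2-D with a disc-shaped structuring element slightly larger than the expected calcification size.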
Abstract:
This work presents an alternative to the process of classifying the central segregation defect in steel samples, using the digital images generated during the Baumann test. The proposed algorithm aims to combine digital image processing techniques with specialists' knowledge of the central segregation defect in order to classify it. The implemented algorithm includes the identification and segmentation of the segregated line by applying the Hough transform and adaptive thresholding. Additionally, the algorithm proposes a mapping of central segregation attributes onto the defect's severity grades, according to continuity and intensity criteria. The mapping was performed by analyzing individual characteristics, such as length, width and area, of the segmented elements that make up the segregated line. The algorithm's performance was evaluated at two specific points, according to its implementation phase. For the evaluation, 255 images of real samples from two steel mills, distributed across the different severity grades, were analyzed. The results of the first implementation phase show that identification of the segregated line achieves an accuracy of 93%. The classifications from the mapping onto the defect criticality classes, in the second implementation phase, achieve an accuracy of 92% for the continuity criterion and 68% for the intensity criterion.
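The adaptive-thresholding step can be sketched by comparing each pixel against the mean of its local neighbourhood instead of one global value, which tolerates the uneven background of a Baumann print. A 1-D illustrative sketch flagging bright outliers (flagging the dark segregated line would be the symmetric test below the mean); the window, offset and scan-line values are assumptions, not the thesis's parameters:

```python
def adaptive_threshold(sig, window, offset):
    """Local-mean adaptive thresholding: mark a pixel when it exceeds
    the mean of its neighbourhood by more than `offset`. A global
    threshold would fail on a signal with a slowly varying baseline."""
    r = window // 2
    out = []
    for i in range(len(sig)):
        neigh = sig[max(0, i - r):i + r + 1]
        local_mean = sum(neigh) / len(neigh)
        out.append(1 if sig[i] > local_mean + offset else 0)
    return out

# One anomalous pixel on a slowly rising baseline.
scan = [100, 102, 104, 160, 108, 110, 112]
print(adaptive_threshold(scan, window=3, offset=10))  # -> [0, 0, 0, 1, 0, 0, 0]
```

The segmented pixels would then feed the Hough transform, which votes for the straight line best explaining them.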
Abstract:
On the basis of aerial photographs of sea ice floes in the marginal ice zone (MIZ) of Prydz Bay, acquired from December 2004 to February 2005 during the 21st Chinese National Antarctic Research Expedition, image processing techniques are employed to extract geometric parameters of floes from two merged transects covering the whole MIZ. Variations of these parameters with distance into the MIZ are then obtained. Different parameters of floe size, namely area, perimeter, and mean caliper diameter (MCD), follow three similar stages of increasing, levelling off, and increasing again with distance from the open ocean. Floe shape parameters (roundness and the ratio of perimeter to MCD), however, vary less markedly than those of floe size. To correct the deviation of the cumulative floe size distribution from the ideal power law, an upper truncated power-law function and a Weibull function are used, and four calculated parameters of these functions are found to be important descriptors of the evolution of the floe size distribution in the MIZ. Among them, Lr of the upper truncated power-law function indicates the upper limit of floe size and roughly equals the maximum floe size in each square sample area. L0 of the Weibull distribution shows an increasing proportion of larger floes in squares farther from the open ocean and roughly equals the mean floe size. D of the upper truncated power-law function is closely associated with the degree of confinement during ice breakup; its decrease with distance into the MIZ indicates the weakening of confinement conditions on floes owing to wave attenuation. The gamma of the Weibull distribution characterizes the degree of homogeneity in a data set; it also decreases with distance into the MIZ, implying that floe size distributions increase in range. Finally, a statistical test on floe size is performed to divide the whole MIZ into three distinct zones made up of floes of quite different characteristics.
This zonal structure of floe size also agrees well with the trends of floe shape and floe size distribution, and is believed to be a straightforward result of wave-ice interaction in the MIZ.
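The two fitted functions can be written down directly. The sketch below evaluates an upper truncated power-law exceedance (which vanishes at the truncation size Lr) and a Weibull exceedance; all parameter values are invented for illustration, not the paper's fitted values:

```python
import math

def truncated_power_law(x, c, D, Lr):
    """Upper truncated power-law exceedance: proportional number of
    floes larger than x, forced to zero at the truncation size Lr."""
    return c * (x ** -D - Lr ** -D)

def weibull_exceedance(x, L0, gamma):
    """Weibull exceedance probability with scale L0 (roughly the mean
    floe size) and shape gamma (degree of homogeneity of the field)."""
    return math.exp(-(x / L0) ** gamma)

# Both curves vanish (or decay) as floe size approaches its upper limit.
print(truncated_power_law(100.0, c=1.0, D=1.5, Lr=100.0))      # -> 0.0
print(round(weibull_exceedance(50.0, L0=50.0, gamma=1.0), 4))  # -> 0.3679
```

Fitting c, D, Lr, L0 and gamma per sample square, as the paper does, then reduces each square's floe population to the four descriptors discussed above.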
Abstract:
The Alborz Mountain range separates the northern part of Iran from the southern part. It also isolates a narrow coastal strip to the south of the Caspian Sea from the Central Iran plateau. Until the 1950s, communication between south and north was via two roads and one rail link. In 1963 work was completed on a major access road via the Haraz Valley (the most physically hostile area in the region). From the beginning the road was plagued by accidents resulting from unstable slopes on either side of the valley. Heavy casualties persuaded the government to undertake major engineering works to eliminate "black spots" and make the road safe. However, despite substantial and prolonged expenditure, the problems were not solved, and casualties increased steadily with the growth of traffic using the road. Another road was built to bypass the Haraz road and opened to traffic in 1983, but closure of the Haraz road remained impossible because of the growth of settlements along the route and the need for access to other installations such as the Lar Dam. The aim of this research was to explore the possibility of applying Landsat MSS imagery to locating black spots along the road and the associated instability problems. Landsat data had not previously been applied to highway engineering problems in the study area. Aerial photographs are in general better than satellite images for detailed mapping, but Landsat images are superior for reconnaissance and adequate for mapping at the 1:250,000 scale. The broad overview and lack of distortion in Landsat imagery make the images ideal for structural interpretation. The results of Landsat digital image analysis showed that certain rock types and structural features can be delineated and mapped. The most unstable areas, comprising steep slopes free of vegetation cover, can be identified using image processing techniques. Structural lineaments revealed by the image analysis led to improved results (delineation of unstable features).
Damavand Quaternary volcanics were found to be the dominant rock type along a 40 km stretch of the road. These rock types are inherently unstable and partly responsible for the difficulties along the road. For more detailed geological and morphological interpretation, a sample of small subscenes was selected and analysed. A specially developed image analysis package was designed at Aston for use on a non-specialized computing system. Using this package, a new method for image classification was developed, allowing accurate delineation of the critical features of the study area.
Abstract:
A sizeable amount of the testing in eye care requires either the identification of targets such as letters, to assess functional vision, or the subjective evaluation of imagery by an examiner. Computers can render a variety of different targets on their monitors and can be used to store and analyse ophthalmic images. However, existing computing hardware tends to be large, screen resolutions are often too low, and objective assessments of ophthalmic images are unreliable. Recent advances in mobile computing hardware and computer vision systems can be used to enhance clinical testing in optometry. High-resolution touch screens embedded in mobile devices can render targets at a wide variety of distances and can be used to record and respond to patient responses, automating testing methods. This has opened up new opportunities in computerised near-vision testing. Equally, new image processing techniques can be used to increase the validity and reliability of objective computer vision systems. Three novel apps for assessing reading speed, contrast sensitivity and amplitude of accommodation were created by the author to demonstrate the potential of mobile computing to enhance clinical measurement. The reading speed app could present sentences effectively, control illumination and automate the testing procedure for reading speed assessment. The contrast sensitivity app made use of a bit-stealing technique and a swept-frequency target to rapidly assess a patient's full contrast sensitivity function at both near and far distances. Finally, customised electronic hardware was created and interfaced to an app on a smartphone to allow free-space measurement of the amplitude of accommodation. A new geometrical model of the tear film and a ray-tracing simulation of a Placido disc topographer were produced to provide insights into the effect of tear film breakdown on ophthalmic images.
Furthermore, a new computer vision system, using a novel eyelash segmentation technique, was created to demonstrate the potential of computer vision systems for the clinical assessment of tear stability. Studies undertaken by the author to assess the validity and repeatability of the novel apps found that their repeatability was comparable to, or better than, existing clinical methods for reading speed and contrast sensitivity assessment. Furthermore, the apps offered reduced examination times in comparison with their paper-based equivalents. The reading speed and amplitude of accommodation apps correlated highly with existing methods of assessment, supporting their validity. Questions remain over the validity of using a swept-frequency sine-wave target to assess patients' contrast sensitivity functions, as no clinical test provides the same range of spatial frequencies and contrasts, nor equivalent assessment at distance and near. A validation study of the new computer vision system found that the author's tear metric correlated better with existing subjective measures of tear film stability than those of a competing computer vision system. However, repeatability was poor in comparison with the subjective measures, owing to eyelash interference. The new mobile apps, computer vision system and studies outlined in this thesis provide further insight into the potential of applying mobile and image processing technology to enhance clinical testing by eye care professionals.
Abstract:
The morphology of an asphalt mixture can be defined as a set of parameters describing the geometrical characteristics of its constituent materials, their relative proportions, and their spatial arrangement in the mixture. The present study investigates the effect of morphology on the meso- and macro-mechanical response of the mixture. An analysis approach based on X-ray computed tomography (CT) data is used for the meso-structural characterisation. Image processing techniques are used to systematically vary the internal structure and obtain different morphology structures. A morphology framework is used to characterise the average mastic coating thickness around the main load-carrying structure. The uniaxial tension simulation shows that the mixtures with the lowest coating thickness exhibit better inter-particle interaction, with more continuous load-distribution chains between adjacent aggregate particles, fewer stress concentrations and less strain localisation in the mastic phase.
Abstract:
Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about different kinds of geometrical objects, such as high-resolution digital terrain models (DTMs), buildings, and trees. In the past decade, LIDAR has attracted growing interest from researchers in the fields of remote sensing and GIS. Compared with traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, the voluminous data pose a new challenge for automated extraction of geometrical information, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points. In this dissertation, a framework is proposed to automatically extract different kinds of geometrical objects, such as terrain and buildings, from LIDAR data. These objects are essential to numerous applications such as flood modeling, landslide prediction and hurricane animation. The framework consists of several intuitive algorithms. First, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation difference threshold of the filter, the measurements of vehicles, vegetation, and buildings are removed, while ground data are preserved. Then, building measurements are identified from the non-ground measurements using a region-growing algorithm based on a plane-fitting technique.
Raw footprints for the segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove the noise caused by irregularly spaced LIDAR measurements. To reconstruct 3D building models, the raw 2D topology of each building is first extracted and then adjusted. Since the adjusting operations for simple building models do not work well on 2D topology, a 2D snake algorithm is proposed to adjust it. The 2D snake algorithm consists of newly defined energy functions for topology adjustment and a linear algorithm to find the minimal energy value of 2D snake problems. Data sets from urbanized areas, including large institutional, commercial, and small residential buildings, were employed to test the proposed framework. The results demonstrated that the proposed framework achieves very good performance.
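The progressive morphological filter can be sketched in one dimension: open the elevation profile with a growing window and flag cells that rise more than the pass's threshold above the opened surface. The windows, thresholds and toy profile below are assumptions for illustration, not the dissertation's settings:

```python
def opening(z, w):
    """Morphological opening of a 1-D elevation profile: erosion
    (sliding min) followed by dilation (sliding max) with window w."""
    r = w // 2
    eroded = [min(z[max(0, i - r):i + r + 1]) for i in range(len(z))]
    return [max(eroded[max(0, i - r):i + r + 1]) for i in range(len(eroded))]

def progressive_morphological_filter(z, windows, thresholds):
    """1-D sketch of the progressive morphological ground filter:
    each pass opens the surface with a larger window, flags cells
    rising more than that pass's elevation threshold above the opened
    surface as non-ground, then keeps the opened surface for the
    next pass. Small windows remove cars; larger ones remove buildings."""
    ground = [True] * len(z)
    surface = list(z)
    for w, t in zip(windows, thresholds):
        opened = opening(surface, w)
        for i in range(len(z)):
            if surface[i] - opened[i] > t:
                ground[i] = False
            surface[i] = opened[i]
    return ground

# Flat ground (10 m) with a narrow car (13 m) and a wide building (20 m).
z = [10, 10, 13, 10, 10, 20, 20, 20, 10, 10]
ground = progressive_morphological_filter(z, windows=[3, 7], thresholds=[1, 2])
print([i for i, g in enumerate(ground) if not g])  # -> [2, 5, 6, 7]
```

The first pass (window 3) strips the car; the second (window 7) is wide enough to remove the building while the flat ground survives both.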
Abstract:
On the shallow continental shelf off northeastern Rio Grande do Norte, Brazil, important underwater geomorphological features are found 6 km from the coastline: coral reefs, locally known as "parrachos". The present study aims to characterize and analyze these geomorphological features, the benthic surface, and the distribution of biogenic sediments found in the parrachos of Rio do Fogo and the associated shallow platform, using remote sensing products and in situ data collection. This was made possible by sedimentological, bathymetric and geomorphological maps elaborated from composite bands of images from the ETM+/Landsat-7, OLI/Landsat-8, MS/GeoEye and PAN/WorldView-1 satellite sensors, together with the analysis of bottom-sediment samples. These maps were analyzed, jointly interpreted and validated in fieldwork, permitting the generation of a new geomorphological zoning of the shallow shelf under study and a geoenvironmental map of the parrachos of Rio do Fogo. The images used were subjected to digital image processing techniques. All data and information obtained were stored in a Geographic Information System (GIS) and can be made available to the scientific community. This shallow platform has a carbonate bottom composed mostly of algae. The collected and analyzed sediment samples can be classified as biogenic carbonate sands, as 75% of their composition is calcareous algae. The most abundant classes are green algae, red algae, non-biogenic sediments (mineral grains), ancient algae and molluscs. In the parrachos, the following features were mapped: the Barreta Channel, intertidal reefs, submerged reefs, spurs and grooves, pools, a sandy bank, an algae bank, sea grass, submerged roads and the Rio do Fogo Channel. This work presents new information about the geomorphology and evolution of the study area and will guide future decision-making in the handling and environmental management of the region.
Abstract:
The increase in world population, with a higher proportion of elderly people, leads to an increase in the number of individuals with vision loss, and cataracts are one of the leading causes of blindness worldwide. A cataract is an eye disease consisting of the partial or total opacity of the crystalline lens (the natural lens of the eye) or its capsule. It can be triggered by several factors such as trauma, age, diabetes mellitus and medications, among others. It is known that coverage by ophthalmologists in rural and poor areas of Brazil is lower than needed, and many patients with treatable diseases such as cataracts remain undiagnosed and therefore untreated. In this context, this project presents the development of OPTICA, a teleophthalmology system using smartphones for the detection of ophthalmic emergencies, providing diagnostic aid for cataract using expert systems and image processing techniques. The images are captured by a cellphone camera and, along with a questionnaire containing patient information, are transmitted securely via the Mobile SANA platform to an online server hosting an intelligent system that assists in the diagnosis of cataract; ophthalmologists then analyze the information and write the patient's report. Thus, OPTICA provides eye care to the poorest and least favored population, improving the screening of critically ill patients and increasing access to diagnosis and treatment.
Abstract:
Fluorescent proteins are an essential tool in many fields of biology, since they allow us to watch the development of structures and the dynamic processes of cells in living tissue with the aid of fluorescence microscopy. Optogenetics is another technique currently in wide use in neuroscience. In general, this technique allows neurons to be activated or deactivated by shining light of certain wavelengths on cells carrying light-sensitive ion channels, and it can be used together with fluorescent proteins. This dissertation has two main objectives. Initially, we study the interaction of light radiation with mouse brain tissue for application in optogenetic experiments. In this step, we model absorption and scattering effects using the characteristics of mouse brain tissue and Kubelka-Munk theory, for specific wavelengths, as a function of light penetration depth within the tissue. Furthermore, we model temperature variations using the finite element method to solve Pennes' bioheat equation, with the aid of COMSOL Multiphysics Modeling Software 4.4, simulating light-stimulation protocols typically used in optogenetics. Subsequently, we develop computational algorithms to reduce the exposure of neurons to the light radiation needed to visualize their emitted fluorescence. At this stage, we describe the image processing techniques developed for fluorescence microscopy to reduce the exposure of brain samples to the continuous light responsible for fluorochrome excitation. The developed techniques are able to track, in real time, a region of interest (ROI) and replace the fluorescence emitted by the cells with a virtual mask, obtained by overlaying the tracked ROI on fluorescence information stored previously, preserving cell location independently of the exposure time to fluorescent light.
In summary, this dissertation investigates and describes the effects of light radiation on brain tissue in the context of optogenetics, and provides a computational tool for fluorescence microscopy experiments to reduce image bleaching and photodamage caused by the intense exposure of fluorescent cells to light radiation.
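The depth dependence of light intensity under Kubelka-Munk theory is often reduced, when absorption is neglected, to T(z) = 1 / (S z + 1). The sketch below uses that simplification with an assumed scattering coefficient, not the tissue parameters fitted in the dissertation:

```python
def km_transmission(z_mm, S_per_mm):
    """Kubelka-Munk scattering-only transmission through depth z:
    T(z) = 1 / (S*z + 1). Absorption is neglected here, a common
    simplification in optogenetic light-dosimetry estimates."""
    return 1.0 / (S_per_mm * z_mm + 1.0)

# Fraction of light remaining at increasing depth, for an assumed
# scattering coefficient S = 10 mm^-1 (illustrative, not measured).
for z in (0.0, 0.1, 0.5, 1.0):
    print(z, round(km_transmission(z, S_per_mm=10.0), 3))
```

Multiplying this transmission by the geometric spread of the fiber's cone of light would give the irradiance profile needed to decide whether a stimulation protocol reaches the opsin activation threshold at a given depth.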