916 results for Multitemporal image processing
Abstract:
In this paper, we present a novel texture analysis method based on deterministic partially self-avoiding walks and fractal dimension theory. After finding the attractors of the image (sets of pixels) using deterministic partially self-avoiding walks, they are dilated toward the whole image by adding pixels according to their relevance. The relevance of each pixel is calculated as the shortest path between the pixel and the pixels that belong to the attractors. The proposed texture analysis method is demonstrated to outperform popular and state-of-the-art methods (e.g. Fourier descriptors, co-occurrence matrices, Gabor filters and local binary patterns), as well as the deterministic tourist walk method and recent fractal methods, on well-known texture image datasets.
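The walk itself is simple to state: from the current pixel, move to the most similar neighbour not visited in the last μ steps, until the trajectory falls into a cycle (the attractor). A minimal sketch of the idea follows; the 8-neighbourhood rule, scan-order tie-breaking and cycle test are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def tourist_walk(img, start, mu=1, max_steps=200):
    """Deterministic partially self-avoiding walk on a grayscale image.
    From the current pixel, move to the 8-neighbour with the most similar
    intensity, excluding the last `mu` visited pixels (the memory window).
    Returns the trajectory; its final repeating cycle is the attractor."""
    h, w = img.shape
    path = [start]
    for _ in range(max_steps):
        y, x = path[-1]
        recent = set(path[-mu:])              # positions forbidden by the memory
        best, best_d = None, None
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy, dx) == (0, 0) or not (0 <= ny < h and 0 <= nx < w):
                    continue
                if (ny, nx) in recent:
                    continue
                d = abs(int(img[ny, nx]) - int(img[y, x]))
                if best_d is None or d < best_d:   # first-found wins ties
                    best, best_d = (ny, nx), d
        if best is None:
            break                              # walker is trapped
        path.append(best)
        # a state (position + memory window) seen twice => attractor reached
        state = tuple(path[-(mu + 1):])
        if any(tuple(path[i:i + mu + 1]) == state
               for i in range(len(path) - mu - 2)):
            break
    return path
```

Texture descriptors are then built from statistics of many such trajectories (e.g. transient and attractor lengths), one walk per starting pixel.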
Abstract:
Dynamic texture is a recent field of investigation that has received growing attention from the computer vision community in recent years. These patterns are moving textures in which the concept of self-similarity, familiar from static textures, is extended to the spatio-temporal domain. In this paper, we propose a novel approach to dynamic texture representation that can be used for both texture analysis and segmentation. In this method, deterministic partially self-avoiding walks are performed on three orthogonal planes of the video in order to combine appearance and motion features. We validate our method on three applications of dynamic texture that present interesting challenges: recognition, clustering and segmentation. Experimental results on these applications indicate that the proposed method improves dynamic texture representation compared to the state of the art.
Abstract:
A frequently discussed topic in Intelligent Transportation Systems (ITS), vehicle identification, used in a large share of ITS applications, should be understood as a set of hardware, software and telecommunications resources that interact to achieve, from a functional standpoint, the goal of digitally extracting and transmitting a vehicle's identity. It is performed both by systems that transmit and receive a digital identity and by systems installed in the road infrastructure that are able to recognize the license plates of circulating vehicles. When it comes to automatic identification through license plate recognition, studies have concentrated heavily on image processing technologies, mostly without addressing the systemic view needed to understand more comprehensively all the variables that can interfere with the effectiveness of identification. Aiming to contribute to a better understanding and use of automatic license plate recognition systems, this work proposes a layered, systemic model to represent their components. Associated with this model, it proposes a classification of the various types of failures that can degrade their performance. An analysis of results obtained in field tests with plate identification systems used for vehicle enforcement points out relevant findings and the limitations of obtaining correlations between variables, given the many factors that can influence the results. Interviews indicate the types of failures that occur most frequently during the operation of these systems. Finally, this work proposes future studies and presents a glossary of terms that may be useful to new researchers.
Abstract:
A wide variety of polymeric macrofibers for concrete reinforcement is available today. By nature, these fibers present a great diversity of characteristics and properties, and these variations affect their performance as concrete reinforcement. However, there are no Brazilian standards on the subject, and the characterization methodologies of foreign standards diverge. Some standards establish that the mechanical behavior should be characterized on the original filaments, while others prescribe methods defined for metallic materials. The standard EN 14889-2:2006 is the most comprehensive, but it leaves doubts about the adequacy of its criteria for the geometric characterization of the fibers and does not define a specific test method for their mechanical characterization. There is therefore a need to establish a methodology that enables a quality control program for the fiber under its conditions of use. Such a methodology would also provide a way to characterize the material for experimental studies, giving those works a stronger scientific basis than the manufacturers' data on which they frequently rely. Thus, an experimental study was developed focusing on the characterization of two polymeric macrofibers available on the Brazilian market. The study focused on determining the geometric parameters and on the mechanical characterization through tensile strength testing and evaluation of the modulus of elasticity. The European standard EN 14889-2:2006 was adopted as the reference for the geometric characterization. Length was measured by two methods: the caliper method and the digital image analysis method, using image processing software. For the diameter, in addition to the aforementioned methodologies, the density method was used.
It is concluded that the caliper method, provided the macrofibers are stretched beforehand, and the digital image method can equally be used to measure length. To determine the diameter, the density method is recommended. For the mechanical characterization, a methodology was developed from information obtained in other tests: direct tensile tests were performed on macrofibers glued to textile-fabric frames. In addition, the effect on the material's mechanical behavior of abrasive contact between the macrofibers and the aggregates during mixing in a concrete mixer was evaluated, as was the effect of the method used to determine the cross-sectional area on the results of the fiber tensile test. It is concluded that the proposed direct tensile test method is viable, especially for determining tensile strength; the modulus of elasticity, in turn, ends up underestimated. Determining the fiber cross-sectional area by the density method also gave the best results. Furthermore, it was confirmed that the friction between the fibers and the aggregate during mixing compromises the mechanical behavior, reducing both the strength and the modulus of elasticity. It can therefore be stated that the proposed methodology for the geometric and mechanical control of polymeric macrofibers is adequate for characterizing the material.
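The density method mentioned above infers an equivalent diameter from quantities that are easy to measure: the cross-sectional area follows from mass, length and material density, and the diameter is that of a circle of equal area. A minimal sketch, with illustrative numbers (a polypropylene-like density and fiber mass), not the thesis's data:

```python
import math

def equivalent_diameter_mm(mass_g, length_mm, density_g_cm3):
    """Density method: cross-sectional area A = m / (rho * L);
    the equivalent diameter is that of a circle with the same area."""
    length_cm = length_mm / 10.0
    area_cm2 = mass_g / (density_g_cm3 * length_cm)
    area_mm2 = area_cm2 * 100.0            # 1 cm^2 = 100 mm^2
    return math.sqrt(4.0 * area_mm2 / math.pi)

# illustrative values: 0.011 g per 50 mm fiber, density 0.91 g/cm^3
d = equivalent_diameter_mm(0.011, 50.0, 0.91)
```

In practice the per-fiber mass would be obtained by weighing a counted batch of fibers and dividing, which averages out weighing error.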
Abstract:
Mathematical morphology provides a systematic approach to extracting geometric features from binary images, using morphological operators that transform the original image into another by means of a third image called the structuring element; it emerged in the 1960s with the researchers Jean Serra and Georges Matheron. Fuzzy mathematical morphology, initially proposed by Goetcherian using fuzzy logic, extends these operators to grayscale and color images. With this approach it is possible to study fuzzy connectives, which opens some scope of analysis for the construction of morphological operators and their applicability in image processing. In this work, we propose the development of fuzzy morphological operators based on R-implications to aid and improve image processing, and then build a system with these operators to count mycorrhizal fungus spores and red blood cells. The hypothetical-deductive methodology was used for the formal part and an incremental-iterative methodology for the experimental part. These operators were applied to digital and microscopic images. The conjunctions and implications of fuzzy mathematical morphology are used to choose the best adjunction to apply to the problem at hand, i.e., we apply automorphisms to the implications and observe their influence on image segmentation and subsequent processing. To validate the developed system, it was applied to counting problems in microscopic images, extending to pathological images. For spore counting, the best operator was the Gödel erosion. Three groups of fuzzy morphological operators were developed (Łukasiewicz, Gödel and Goguen), which admit a variety of applications.
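The Gödel erosion singled out above can be sketched concretely. A fuzzy erosion of an image f (values in [0, 1]) by a fuzzy structuring element B is (ε_B f)(x) = inf_y I(B(y), f(x + y)), where I is an R-implication; for Gödel, I(a, b) = 1 if a ≤ b, else b. A minimal numpy sketch under those standard definitions (edge padding and the window layout are implementation assumptions):

```python
import numpy as np

def godel_implication(a, b):
    """Goedel R-implication: I(a, b) = 1 if a <= b, else b."""
    return np.where(a <= b, 1.0, b)

def fuzzy_erosion(img, se):
    """Fuzzy erosion of an image with values in [0, 1] by a fuzzy
    structuring element `se` (odd-sized square), using the Goedel
    implication: (erode f)(x) = inf_y I(se(y), f(x + y))."""
    k = se.shape[0] // 2
    padded = np.pad(img, k, mode='edge')
    out = np.ones_like(img, dtype=float)
    h, w = img.shape
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            window = padded[k + dy:k + dy + h, k + dx:k + dx + w]
            out = np.minimum(out, godel_implication(se[k + dy, k + dx], window))
    return out
```

With a flat structuring element of ones, the Gödel erosion reduces to a local-minimum filter, which is the classical grayscale erosion; a zero-valued element erodes to the constant image 1, since I(0, b) = 1.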
Abstract:
The increase in world population, with a higher proportion of elderly people, leads to an increase in the number of individuals with vision loss, and cataracts are one of the leading causes of blindness worldwide. A cataract is an eye disease consisting of the partial or total opacity of the crystalline lens (the natural lens of the eye) or its capsule. It can be triggered by several factors such as trauma, age, diabetes mellitus and medications, among others. It is known that coverage by ophthalmologists in rural and poor areas of Brazil is less than needed, and many patients with treatable diseases such as cataracts go undiagnosed and therefore untreated. In this context, this project presents the development of OPTICA, a teleophthalmology system using smartphones for ophthalmic emergency detection, providing diagnostic aid for cataract using expert systems and image processing techniques. The images are captured by a cellphone camera and, along with a questionnaire filled with patient information, are transmitted securely via the Mobile SANA platform to an online server that hosts an intelligent system to assist in the diagnosis of cataract, and provides ophthalmologists who analyze the information and write back the patient's report. Thus, OPTICA provides eye care to the poorest and least favored population, improving the screening of critically ill patients and increasing access to diagnosis and treatment.
Abstract:
The main objective of this thesis is to implement a supporting architecture for autonomic hardware systems, capable of managing hardware running in reconfigurable devices. The proposed architecture implements manipulation, generation and communication functionalities using the Context Oriented Active Repository approach. The solution consists of a hardware/software architecture called the "Autonomic Hardware Manager (AHM)", which contains an Active Repository of hardware components. Using the repository, the architecture is able to manage the connected systems at run time, allowing the implementation of autonomic features such as self-management, self-optimization, self-description and self-configuration. The proposed architecture also contains a meta-model that allows the representation of the Operating Context of hardware systems. This meta-model serves as the basis for the context sensing modules needed in the Active Repository architecture. To demonstrate the functionalities of the proposed architecture, experiments were designed and implemented to support the thesis hypothesis and its objectives. Three experiments were planned and implemented: the Hardware Reconfigurable Filter, an application that implements digital filters using reconfigurable hardware; the Autonomic Image Segmentation Filter, which presents the design and implementation of an autonomic image processing application; and finally the Autonomic Autopilot, an autopilot for unmanned aerial vehicles. In this work, the application architectures were organized in modules according to their functionalities. Some modules were implemented in HDL and synthesized in hardware; others were kept in software. The applications were then integrated with the AHM to allow their adaptation to different Operating Contexts, making them autonomic.
Abstract:
Fluorescent proteins are an essential tool in many fields of biology, since they allow us to watch the development of structures and dynamic processes of cells in living tissue with the aid of fluorescence microscopy. Optogenetics is another technique currently widely used in neuroscience. In general, it allows neurons to be activated or deactivated by shining light of certain wavelengths on cells that express light-sensitive ion channels, and it can be used together with fluorescent proteins. This dissertation has two main objectives. Initially, we study the interaction of light radiation with mouse brain tissue as applied to optogenetic experiments. In this step, we model absorption and scattering effects using mouse brain tissue characteristics and Kubelka-Munk theory, for specific wavelengths, as a function of light penetration depth (distance) within the tissue. Furthermore, we model temperature variations using the finite element method to solve Pennes' bioheat equation, with the aid of COMSOL Multiphysics Modeling Software 4.4, where we simulate light stimulation protocols typically used in optogenetics. Subsequently, we develop computational algorithms to reduce the exposure of neurons to the light radiation necessary for visualizing their emitted fluorescence. At this stage, we describe the image processing techniques developed for fluorescence microscopy to reduce the exposure of brain samples to the continuous light responsible for fluorochrome excitation. The developed techniques are able to track, in real time, a region of interest (ROI) and replace the fluorescence emitted by the cells with a virtual mask, produced by overlaying the tracked ROI and previously stored fluorescence information, preserving cell location independently of the exposure time to fluorescent light.
In summary, this dissertation intends to investigate and describe the effects of light radiation in brain tissue, within the context of Optogenetics, in addition to providing a computational tool to be used in fluorescence microscopy experiments to reduce image bleaching and photodamage due to the intense exposure of fluorescent cells to light radiation.
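In its scattering-dominated form, the Kubelka-Munk model used for this kind of light-penetration estimate reduces to a transmission fraction T(z) = 1/(Sz + 1) at depth z for scattering coefficient S. A minimal sketch; the coefficient value below is illustrative, roughly in the range reported for rodent brain at blue wavelengths, not a value taken from this dissertation:

```python
def km_transmission(z_mm, S_per_mm):
    """Scattering-only Kubelka-Munk transmission through depth z:
    T(z) = 1 / (S*z + 1). Absorption and the geometric spreading of
    the fiber's light cone would further reduce the delivered power."""
    return 1.0 / (S_per_mm * z_mm + 1.0)

S = 11.2                      # illustrative scattering coefficient, mm^-1
profile = [km_transmission(z / 10.0, S) for z in range(11)]  # 0 to 1 mm
```

Plotting such a profile makes the practical point of the modeling step: most of the optical power is lost within the first few hundred micrometers of tissue.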
Abstract:
Lung cancer is one of the most common types of cancer and has the highest mortality rate. Patient survival is highly correlated with early detection. Computed tomography greatly aids the early detection of lung cancer by offering a minimally invasive medical diagnostic tool. However, the large amount of data per examination makes interpretation difficult, which leads to nodules being missed by human radiologists. This thesis presents the development of a computer-aided detection (CADe) tool for lung nodules in computed tomography studies. The system, called LCD-OpenPACS (Lung Cancer Detection - OpenPACS), is meant to be integrated into the OpenPACS system and meets all the requirements for use in the workflow of health facilities belonging to the SUS (the Brazilian public health system). LCD-OpenPACS makes use of image processing techniques (region growing and watershed), feature extraction (Histogram of Oriented Gradients), dimensionality reduction (Principal Component Analysis) and a classifier (Support Vector Machine). The system was tested on 220 cases, totaling 296 pulmonary nodules, with a sensitivity of 94.4% and 7.04 false positives per case. The total processing time was approximately 10 minutes per case. The system detects pulmonary nodules (solitary, juxtavascular, ground-glass opacity and juxtapleural) between 3 mm and 30 mm.
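The classification chain described (gradient-orientation features, then PCA, then an SVM) can be sketched end to end. This is a toy stand-in, not the thesis's system: a single whole-patch orientation histogram replaces a full HOG descriptor, and synthetic oriented patches replace CT candidates; only the features-PCA-SVM structure is the point:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def orientation_histogram(patch, bins=9):
    """Simplified HOG-style descriptor: one gradient-orientation
    histogram over the whole patch, weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

# synthetic candidates: vertically vs. horizontally structured patches
rng = np.random.default_rng(0)
ramp = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
X, y = [], []
for _ in range(40):
    X.append(orientation_histogram(ramp + rng.normal(0, 0.05, (16, 16)))); y.append(1)
    X.append(orientation_histogram(ramp.T + rng.normal(0, 0.05, (16, 16)))); y.append(0)
X, y = np.array(X), np.array(y)

# the abstract's chain: features -> PCA reduction -> SVM classifier
clf = make_pipeline(PCA(n_components=5), SVC(kernel="rbf"))
clf.fit(X, y)
```

In the real system the candidates come from the region-growing and watershed stages, and the false-positive rate is what the PCA+SVM back end is tuned to control.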
Abstract:
The localization of mobile robots in indoor environments faces many problems, such as accumulated errors and the constant changes that occur in these places. A technique called global vision localizes robots using images acquired by cameras placed so as to cover the area where the robots move. Localization is obtained from marks placed on top of the robot: algorithms search the images for the mark and, upon finding it, recover the robot's position and orientation. Such techniques used to face difficulties related to hardware capacity, which limited their execution in real time; however, the technological advances of recent years have changed that situation, enabling the development and execution of such algorithms at full capacity. The proposal specified here is to develop a localization system for mobile robots in indoor environments that uses global vision to track the robot and acquire images in real time, improving the localization of the robot within the environment. Being a localization method that uses only current information in its calculations, image-based robot localization fits the needs of this kind of place. Besides, it enables more accurate results in real time, which is exactly what the museum application needs.
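Recovering position and orientation from a segmented marker is classically done with image moments: the centroid gives position and the principal axis of the second central moments gives orientation. A minimal sketch of that step, assuming the marker has already been segmented into a binary mask (the real system would also need a second, asymmetric mark to disambiguate the 180-degree direction):

```python
import numpy as np

def pose_from_mask(mask):
    """Estimate a marker's position and orientation from a binary mask:
    the centroid gives position; the principal axis of the second
    central moments gives orientation (radians, modulo pi)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    mu20 = ((xs - cx) ** 2).mean()       # variance along x
    mu02 = ((ys - cy) ** 2).mean()       # variance along y
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cy, cx), theta
```

Because only the current frame feeds the estimate, the error does not accumulate over time, which is the property the abstract relies on.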
Abstract:
Content-based image retrieval is important for various purposes, such as disease diagnosis from computed tomography. The social and economic relevance of image retrieval systems has created the need for their improvement. Within this context, content-based image retrieval systems are composed of two stages: feature extraction and similarity measurement. The similarity stage is still a challenge due to the wide variety of similarity functions, which can be combined with the different techniques present in the retrieval process and do not always return the most satisfactory results. The functions most commonly used to measure similarity are the Euclidean and Cosine measures, but researchers have noted limitations of these conventional proximity functions in the similarity search step. For that reason, the Bregman divergences (Kullback-Leibler and I-Generalized) have attracted the attention of researchers due to their flexibility in similarity analysis. Thus, the aim of this research was to conduct a comparative study of the Bregman divergences against the Euclidean and Cosine functions in the similarity step of content-based image retrieval, assessing the advantages and disadvantages of each function. For this, a content-based image retrieval system was created with two stages, offline and online, using the BSM, FISM, BoVW and BoVW-SPM approaches. With this system, three groups of experiments were run on the Caltech101, Oxford and UK-bench databases. The performance of the content-based image retrieval system with the different similarity functions was assessed through the evaluation measures Mean Average Precision, normalized Discounted Cumulative Gain, precision at k, and precision x recall.
Finally, this study shows that the use of the Bregman divergences (Kullback-Leibler and I-Generalized) obtains better results than the Euclidean and Cosine measures, with significant gains for content-based image retrieval.
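The contrast between the two families of measures is easy to see in code: the Kullback-Leibler divergence compares histograms bin-wise through a log-ratio and is asymmetric, unlike the Euclidean distance. A minimal sketch (the epsilon smoothing is an implementation assumption to handle empty bins):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """Kullback-Leibler divergence D(p||q) between normalized feature
    histograms. A Bregman divergence: non-negative, zero only for
    identical histograms, and asymmetric in its arguments."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p = p / p.sum(); q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def euclidean(p, q):
    """Conventional proximity function used as a baseline."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))
```

In a retrieval loop, either function ranks the gallery histograms against the query; the asymmetry of KL means the choice of which argument is the query is itself a design decision.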
Abstract:
Image super-resolution is defined as a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single- and multi-image methods. This thesis focuses on developing algorithms, based on mathematical theories, for single-image super-resolution problems. Indeed, in order to estimate an output image, we adopt a mixed approach: we use both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although the existing methods already perform well, they do not take the geometry of the data into account to regularize the solution, to cluster data samples (samples are often clustered using algorithms with the Euclidean distance as a dissimilarity metric), or to learn dictionaries (often learned using PCA or K-SVD). Thus, state-of-the-art methods still suffer from shortcomings. In this work, we propose three new methods to overcome these deficiencies. First, we developed SE-ASDS (a structure-tensor-based regularization term) in order to improve the sharpness of edges; SE-ASDS achieves much better results than many state-of-the-art algorithms. Then, we proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking into account the underlying geometry of the data. The AGNN and GOC methods outperform spectral clustering, soft clustering, and geodesic-distance-based subset selection in most settings. Next, we proposed the aSOB strategy, which takes into account the geometry of the data and the dictionary size; it outperforms both the PCA and PGA methods. Finally, we combine all our methods in a single algorithm, named G2SR, which shows better visual and quantitative results when compared to those of state-of-the-art methods.
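The structure tensor underlying an edge-aware regularizer like SE-ASDS is the smoothed outer product of the image gradient; its eigenvalues separate flat regions (both small), edges (one large, one small) and corners (both large). A minimal numpy sketch of that building block only, not of the SE-ASDS term itself (the box window stands in for the usual Gaussian smoothing):

```python
import numpy as np

def structure_tensor_eigs(img, r=1):
    """Per-pixel eigenvalues of the 2x2 structure tensor
    J = w * [[gx^2, gx*gy], [gx*gy, gy^2]], smoothed by a box window
    of radius r. Returns lam1 >= lam2 >= 0 per pixel."""
    gy, gx = np.gradient(img.astype(float))

    def box(a):
        p = np.pad(a, r, mode='edge')
        out = np.zeros_like(a)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += p[r + dy:r + dy + a.shape[0], r + dx:r + dx + a.shape[1]]
        return out / (2 * r + 1) ** 2

    jxx, jxy, jyy = box(gx * gx), box(gx * gy), box(gy * gy)
    tr = jxx + jyy
    det = jxx * jyy - jxy ** 2
    disc = np.sqrt(np.maximum(tr ** 2 / 4 - det, 0.0))
    return tr / 2 + disc, tr / 2 - disc
```

A regularizer can then penalize smoothing across directions where lam1 dominates lam2, which is what sharpens edges in the reconstruction.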
Abstract:
In several health areas, professionals (pediatricians, nutritionists, orthopedists, endocrinologists, dentists, etc.) use bone age assessment to diagnose growth disorders in children. Through interviews with specialists in diagnostic imaging and a review of the literature, we identified the TW (Tanner-Whitehouse) method as the most efficient. Even though it achieves better results than other methods, it is still not the most used, due to the complexity of its application. This work presents the possibility of automating this method and thereby making its use more widespread. Two important steps in the evaluation of bone age are addressed: the identification and the classification of regions of interest. Even in radiographs in which the positioning of the hands was not suitable for the TW method, the finger identification algorithm showed good results, as did the use of AAM (Active Appearance Models) for identifying regions of interest, even in radiographs with high contrast and brightness variation. Good results were obtained in classifying the epiphyses into their stages of development, with the middle epiphysis of finger III (the middle finger) chosen to demonstrate performance. The final results show an average hit rate of 90%, and among the misclassified cases the error was only one stage away from the correct stage.
Abstract:
Objective: A study in rats to evaluate the effect of basic Fibroblast Growth Factor (bFGF) on the healing of the abdominal aponeurosis. Methods: Twenty Wistar rats were randomly divided into two equal groups. The animals were anesthetized with intraperitoneal sodium pentobarbital at 20 mg/kg and submitted to a 4 cm midline laparotomy, whose aponeurotic layer was sutured with 5-0 nylon monofilament. In group I, a 5 mg dose of bFGF was applied over the aponeurosis suture; in group II (control), 0.9% saline solution was applied over the suture line. After 7 days of observation, the animals were killed with an anesthetic overdose. The aponeurotic layer, 1.5 cm wide, was submitted to tensile strength testing on an EMIC MF500 testing machine. Biopsies of the suture zones were processed and stained with HE and Masson's trichrome. The histopathological findings were quantified with a digital image capture and processing system (Image-Pro Plus). The data were analyzed with Student's t-test at a 0.05 significance level. Results: In group I (experimental), the suture zone of the aponeurotic layer withstood a load of 1,103±103.39 gf, and the quantification of the histopathological data of this group reached a mean density of 226±29.32. In group II (control), the load withstood by the suture zone was 791.1±92.77 gf. Comparing the mean tensile strengths of the two groups, a significant difference was observed (p<0.01). The histopathological examination of the slides of this group revealed a mean density of 114.1±17.01, also a significant difference between the group means (p<0.01). Conclusion: The data support the conclusion that bFGF increased the strength of the sutured aponeurosis and improved the histopathological parameters of healing.
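The reported comparison (1,103±103.39 gf vs. 791.1±92.77 gf) can be sanity-checked from the summary statistics alone. A sketch assuming equal group sizes of 10 (which follows from the 20 rats split into two equal groups) and the pooled-variance two-sample t statistic:

```python
import math

def t_statistic(m1, s1, n1, m2, s2, n2):
    """Two-sample t statistic with pooled variance (equal-variance
    Student's t-test), computed from means, SDs and group sizes."""
    sp2 = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1.0 / n1 + 1.0 / n2))

# tensile-strength means and SDs reported in the abstract, n = 10 per group
t = t_statistic(1103.0, 103.39, 10, 791.1, 92.77, 10)
```

The resulting statistic is far above the two-tailed critical value for p = 0.01 at 18 degrees of freedom (about 2.88), consistent with the reported p < 0.01.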