916 results for digital image processing
                                
Resumo:
MEDEIROS, Rildeci; MELO, Erica S. F.; NASCIMENTO, M. S. Hemeroteca digital temática: socialização da informação em cinema. In: SEMINÁRIO NACIONAL DE BIBLIOTECAS UNIVERSITÁRIAS, 15., 2008, São Paulo. Anais eletrônicos... São Paulo: CRUESP, 2008. Disponível em: http://www.sbu.unicamp.br/snbu2008/anais/site/pdfs/3018.pdf
                                
Resumo:
AIRES, Kelson R. T.; ARAÚJO, Hélder J.; MEDEIROS, Adelardo A. D. Plane Detection from Monocular Image Sequences. In: VISUALIZATION, IMAGING AND IMAGE PROCESSING, 2008, Palma de Mallorca, Spain. Proceedings... Palma de Mallorca: VIIP, 2008
                                
Resumo:
This work presents the results of a survey in the oil-producing region of the city of Macau, on the northern coast of Rio Grande do Norte. The work was carried out under the Project for Monitoring Environmental Change and the Influence of Hydrodynamic Forcing on the Morphology of Beach Grass Fields, Serra Potiguar, in Macau, with the support of the Geoprocessing Laboratory, linked to PRH22 - Training Program in Geology, Geophysics and Information Technology for Oil and Gas - Department of Geology/CCET/UFRN and the Post-Graduate Program in Oil Science and Engineering/PPGCEP/UFRN. Within an economic-ecological context, this paper assesses the importance of the mangrove ecosystem in the region of Macau and its surroundings, as well as the subsequent exploratory investigation of potential areas for projects involving reforestation and/or environmental restoration. In a first phase, the ecological potential of mangrove forests was confirmed, with primary functions including: (i) protection and stabilization of the shoreline; (ii) nursery for marine life; (iii) source of organic matter for aquatic ecosystems; and (iv) refuge for species, among others. In a second phase, using Landsat imagery and Digital Image Processing (DIP) techniques, about 18,000 hectares of land were identified that could be targeted by environmental projects, inserted into the carbon-market rules established under the Kyoto Protocol. The results also revealed a total area of 14,723.75 hectares occupied by shrimp farming and salt production that could be harnessed for the social, economic and environmental potential of the region, considering that over 60% of this area, i.e., about 8,800 hectares, could be used for planting the genus Avicennia, considered in the literature to be the species that best sequesters atmospheric carbon, reaching a mean value of 59.79 tons/ha of mangrove
                                
Resumo:
This work discusses the importance of image compression for the industry: processing and storing images is a constant challenge at Petrobras, which must optimize storage time while storing the maximum number of images and data. We present an interactive system for processing and storing images in the wavelet domain, together with an interface for digital image processing. The proposal is based on the Peano function and the 1D wavelet transform. The storage system aims to optimize computational space, both for storage and for transmission of images. The Peano function is applied to linearize the images, and the 1D wavelet transform is then used to decompose the resulting signal. These operations extract the information relevant to storing an image at a lower computational cost and with a very small margin of error when the original and processed images are compared, that is, there is little loss of quality when the proposed processing system is applied. The results obtained from the information extracted from the images are displayed in a graphical interface. Through this graphical user interface the user can view and analyze the results of the programs directly on the computer screen, without having to deal with the source code. The graphical user interface and the programs for image processing via the Peano function and the 1D wavelet transform were developed in Java, allowing a direct exchange of information between them and the user
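The linearize-then-decompose pipeline can be sketched in a few lines. This is an illustrative Python sketch (the original system was written in Java), and it substitutes a boustrophedon "snake" scan for the true Peano curve, which is more elaborate to construct:

```python
import numpy as np

def snake_linearize(img):
    """Linearize a 2D image by scanning rows alternately left-to-right and
    right-to-left (a boustrophedon scan, a simplified stand-in for the
    Peano curve used in the work)."""
    rows = [row if i % 2 == 0 else row[::-1] for i, row in enumerate(img)]
    return np.concatenate(rows)

def haar_1d(signal):
    """One level of the 1D Haar wavelet transform: approximation
    (pairwise averages) and detail (pairwise differences) coefficients."""
    s = signal.astype(float)
    approx = (s[0::2] + s[1::2]) / 2.0
    detail = (s[0::2] - s[1::2]) / 2.0
    return approx, detail

def haar_1d_inverse(approx, detail):
    """Reconstruct the signal from one Haar level (lossless here;
    compression would first discard small detail coefficients)."""
    out = np.empty(2 * len(approx))
    out[0::2] = approx + detail
    out[1::2] = approx - detail
    return out

img = np.arange(16).reshape(4, 4)
line = snake_linearize(img)          # 1D signal following the scan
a, d = haar_1d(line)
restored = haar_1d_inverse(a, d)
assert np.allclose(restored, line)   # perfect reconstruction when all details are kept
```

Discarding the smallest entries of `detail` before storage is what trades a small reconstruction error for reduced space, as the abstract describes.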
                                
Resumo:
This work aimed to reproduce and describe a technique for digitally sampling images of rat gait and determining the sciatic functional index (SFI), using a glass walkway and a digital camera to obtain the footage. A controlled crush injury of the sciatic nerve, 3 mm in length, was produced over 30 seconds using hemostatic forceps. A group of 32 rats was evaluated 24 hours before the lesion (serving as control) and 24 hours, 7, 14 and 21 days after the injury. The tests consisted of filming and photographing each animal so as to capture the view from below (through a mirror at 45 degrees); the images were subsequently analyzed with the IMAGE-J program. Measurements were taken of the lengths of the paws (right and left) and of the distance between the ankles. In the SFI analysis, values close to zero (0) suggest that sciatic nerve function is preserved, while values approaching minus one hundred (-100) indicate total loss of function. It was verified in this study that 24 hours before surgery the average SFI was -7.07 +/- 7.88, and 24 hours after injury it rose to an average of -77.95 +/- 13.81, about 10 times larger; 78% of the animals showed 60 to 100% functional loss in motor activity, followed by gradual recovery over the days analyzed, confirming the accuracy and effectiveness of the proposed methodology. These results suggest that studies can be conducted more simply and at lower cost using this technique of digital imaging of footprints during rat gait in the laboratory.
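The abstract does not state which SFI variant was computed from the footprint measurements; a widely used choice in the rat literature is the Bain-Mackinnon-Hunter formula, sketched below with hypothetical measurements:

```python
def sciatic_functional_index(epl, npl, ets, nts, eit, nit):
    """Bain-Mackinnon-Hunter SFI for rats (a standard formula from the
    literature; assumed here, since the work does not name its variant).
    E*/N* = experimental (injured) / normal side; PL = print length,
    TS = 1st-5th toe spread, IT = 2nd-4th intermediary toe spread,
    all in the same unit. 0 ~ normal function, -100 ~ complete loss."""
    return (-38.3 * (epl - npl) / npl
            + 109.5 * (ets - nts) / nts
            + 13.3 * (eit - nit) / nit
            - 8.8)

# Identical measurements on both sides give a value near zero,
# consistent with the pre-injury averages reported in the abstract:
print(sciatic_functional_index(30, 30, 15, 15, 8, 8))  # -8.8
```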
                                
Resumo:
This work deals with a mathematical foundation for digital signal processing from the point of view of interval mathematics. It addresses the open problem of precision and representation of data in digital systems through an interval version of signal representation. Signal processing is a rich and complex area, so this work restricts its focus to linear time-invariant systems. A vast literature exists in the area, but some concepts of interval mathematics need to be redefined or elaborated before a solid theory of interval signal processing can be built. We construct the basic foundations of signal processing in an interval setting: basic properties such as linearity, stability and causality, and an interval version of linear systems and its properties. Interval versions of the convolution and of the Z-transform are presented. Convergence of systems is analyzed using the interval Z-transform, an essentially interval distance, interval complex numbers, and an application to an interval filter.
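The flavor of an interval version of convolution can be illustrated with a minimal sketch (the class and names below are hypothetical, not the work's own definitions): every sample is a closed interval, and interval arithmetic guarantees the output intervals enclose all real-valued outcomes.

```python
from itertools import product

class Interval:
    """Closed interval [lo, hi] with just the arithmetic needed for an
    interval-valued discrete convolution."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        # Interval product: min/max over all endpoint products
        ps = [a * b for a, b in product((self.lo, self.hi), (other.lo, other.hi))]
        return Interval(min(ps), max(ps))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def interval_convolve(x, h):
    """y[n] = sum_k x[k] h[n-k] with interval samples: each output
    interval encloses every outcome of the pointwise real convolutions."""
    y = [Interval(0, 0) for _ in range(len(x) + len(h) - 1)]
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] = y[i + j] + xi * hj
    return y

x = [Interval(1.0, 1.1), Interval(2.0, 2.2)]   # signal with measurement uncertainty
h = [Interval(0.5, 0.5)]                        # exactly known impulse response
print(interval_convolve(x, h))
```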
                                
Resumo:
This work proposes the development of a Computer System for the Analysis of Mammograms (SCAM) that aids the medical specialist in identifying and analyzing lesions present in digital mammograms. The system applies a group of Digital Image Processing (DIP) techniques to help the medical professional extract the information contained in the mammogram. It has an easy-to-use interface that, starting from the supplied mammogram, offers a set of processing operations, such as image enhancement through filtering techniques, segmentation of regions of the mammogram, calculation of lesion areas, thresholding of the lesion, and other tools important for the medical professional's diagnosis. The wavelet transform is integrated into the system to enable multiresolution analysis, thus supplying a method for identifying and analyzing microcalcifications
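Why a wavelet multiresolution analysis helps with microcalcifications can be seen in a one-level 2D Haar decomposition (a minimal sketch; the work does not specify which wavelet family it uses): small bright spots concentrate their energy in the detail bands, where they are easy to isolate.

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2D Haar decomposition: approximation (LL) plus
    horizontal (LH), vertical (HL) and diagonal (HH) detail bands."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2      # filter along rows
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    ll = (lo[0::2, :] + lo[1::2, :]) / 2    # then along columns
    lh = (lo[0::2, :] - lo[1::2, :]) / 2
    hl = (hi[0::2, :] + hi[1::2, :]) / 2
    hh = (hi[0::2, :] - hi[1::2, :]) / 2
    return ll, lh, hl, hh

# A flat background with one bright pixel (a crude stand-in for a
# microcalcification): the spot shows up in the detail bands, while a
# flat region yields zero detail coefficients.
img = np.zeros((8, 8)); img[3, 4] = 100.0
ll, lh, hl, hh = haar2d_level(img)
print(np.abs(lh).max() > 0, np.abs(hh).max() > 0)  # True True
```

Repeating the decomposition on `ll` gives the coarser levels of the multiresolution pyramid.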
                                
Resumo:
Image segmentation is one of the image processing problems that deserves special attention from the scientific community. This work studies unsupervised clustering and pattern recognition methods applicable to medical image segmentation. Methods based on Natural Computing have proven very attractive for such tasks and are studied here as a way to verify their applicability to medical image segmentation. The following methods are implemented: GKA (Genetic K-means Algorithm), GFCMA (Genetic FCM Algorithm), PSOKA (a clustering algorithm based on PSO and K-means) and PSOFCM (a clustering algorithm based on PSO and FCM). In addition, clustering validity indexes are used as a quantitative measure to evaluate the results produced by the algorithms. Visual and qualitative evaluations are also carried out, mainly using data from the BrainWeb brain simulator as ground truth
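All four methods build on the same clustering objective; the baseline they extend is plain k-means on pixel intensities, sketched below (GKA/PSOKA differ in searching for the centroids with a GA or PSO instead of Lloyd-style alternating updates; this sketch is not the work's own implementation):

```python
import numpy as np

def kmeans_1d(values, k, iters=50, seed=0):
    """Plain k-means on a 1D array of pixel intensities: alternate
    assigning each value to its nearest center and recomputing each
    center as the mean of its assigned values."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):             # guard against empty clusters
                centers[j] = values[labels == j].mean()
    return centers, labels

# Two well-separated intensity populations (e.g. tissue vs background):
pixels = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
centers, labels = kmeans_1d(pixels, k=2)
print(sorted(centers))  # the two cluster centers, near 10 and 200
```

On real images the same idea runs on feature vectors per pixel, and a validity index (e.g. within-cluster variance) scores competing segmentations.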
                                
Resumo:
Several mobile robot navigation methods require measuring the robot's position and orientation in its workspace. In the case of wheeled mobile robots, techniques based on odometry determine the robot's localization by integrating the incremental displacements of its wheels. However, this technique is subject to errors that accumulate with the distance traveled by the robot, making its exclusive use unfeasible. Other methods are based on detecting natural or artificial landmarks present in the environment, whose locations are known. This approach does not generate cumulative errors, but it can require more processing time than odometry-based methods. Thus, many methods use both techniques, so that the odometry errors are periodically corrected through measurements obtained from landmarks. Following this approach, this work proposes a hybrid localization system for wheeled mobile robots in indoor environments, based on odometry and natural landmarks. The landmarks are the straight lines defined by the junctions in the floor of the environment, forming a two-dimensional grid. Landmark detection in digital images is performed through the Hough transform, combined with heuristics that allow its application in real time. To reduce the landmark search time, we propose mapping the odometry errors into an area of the captured image that has a high probability of containing the sought landmark
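The line-detection step rests on the standard Hough transform, which the heuristics then restrict to a sub-area of the image. A minimal accumulator-based sketch (not the work's optimized implementation):

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Standard Hough transform for lines: each edge pixel votes for all
    (rho, theta) pairs satisfying rho = x*cos(theta) + y*sin(theta);
    peaks in the accumulator correspond to lines (floor junctions here)."""
    h, w = edges.shape
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc, thetas, diag

# A horizontal floor line y = 5 in a small binary edge image:
edges = np.zeros((20, 20), dtype=bool)
edges[5, :] = True
acc, thetas, diag = hough_lines(edges)
rho_idx, theta_idx = np.unravel_index(acc.argmax(), acc.shape)
print(theta_idx, rho_idx - diag)  # theta within a degree of 90, rho = 5
```

Searching only the image window predicted from the odometry error corresponds to calling the transform on a cropped `edges` array, which is where the speedup comes from.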
                                
Resumo:
The objective of this work was to evaluate two methods for estimating leaf area in Pêra orange trees, based on the analysis of digital images obtained with a scanner and with a digital camera. To determine leaf area, a group of leaf discs was placed on a scanner and the resulting image was stored. The same groups of discs were fixed on white cardboard and photographed with a digital camera. The images from the camera and from the scanner were processed with an image editor capable of counting the pixels of a given color, in this case green. For comparison between the methods, the discs were also measured with a LICOR-3100 optical leaf area meter, using the same groupings. Twenty leaves (five in each quadrant of the plant) were collected per plot of an experiment comparing commercial fertilizers and zinc doses applied to the leaves of seven-year-old plants. The experiment comprised seven treatments and four replications, for a total of 28 plots. Both methods showed high correlation with the area estimated by the optical area meter, taken as the reference method. The analysis of the image obtained with the digital camera, at a resolution of 5.0 megapixels, was the more precise of the two when compared against the area estimated by the optical area meter.
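The pixel-counting step described above is straightforward to sketch: count the pixels whose green channel dominates, then convert to area with the device calibration. The threshold margin and function names below are assumptions for illustration, not the editor the authors used.

```python
import numpy as np

def leaf_area_from_image(rgb, px_per_cm2, g_margin=30):
    """Estimate leaf area by counting 'green' pixels: a pixel counts when
    its green channel exceeds both red and blue by a margin (the margin
    value is an assumption of this sketch). px_per_cm2 is the scanner or
    camera calibration in pixels per square centimeter."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    green_mask = (g - r > g_margin) & (g - b > g_margin)
    return green_mask.sum() / px_per_cm2

# Synthetic 10x10 image: a 4x4 green patch on a white background
img = np.full((10, 10, 3), 255, dtype=np.uint8)
img[2:6, 2:6] = (40, 180, 50)
print(leaf_area_from_image(img, px_per_cm2=4.0))  # 16 green pixels / 4 px per cm^2 = 4.0
```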
                                
Resumo:
Image compression consists of representing an image with a small amount of data without loss of visual quality. Data compression is important when large images are used, for example satellite images. Full-color digital images typically use 24 bits to specify the color of each pixel, with 8 bits for each of the primary components: red, green and blue (RGB). Compressing an image with three or more bands (multispectral) is fundamental to reduce transmission, processing and storage time. Many applications depend on images - medical imaging, satellite imaging, sensing, etc. - which makes image data compression important. In this work a new method for compressing color images is proposed, based on a measure of the information in each band. The technique is called Self-Adaptive Compression (SAC): each band of the image is compressed with a different threshold, so as to preserve information and obtain better results. SAC applies heavy compression to highly redundant (low-information) bands and soft compression to bands carrying a larger amount of information. Two image transforms are used in this technique: the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). The first step is to convert the data into new, uncorrelated bands with PCA; the DCT is then applied to each band. Loss is introduced when a threshold discards coefficients. This threshold is calculated from two elements: the PCA result and a user parameter that defines the compression rate. The system produces three different thresholds, one for each band of the image, proportional to its amount of information. For image reconstruction, the inverse DCT and inverse PCA are applied. SAC was compared with the JPEG (Joint Photographic Experts Group) standard and with YIQ compression, and better results were obtained in terms of MSE (Mean Squared Error). Tests showed that SAC gives better quality under heavy compression, with two advantages: (a) being adaptive, it is sensitive to the image type, that is, it presents good results for diverse kinds of images (synthetic, landscapes, people, etc.); and (b) it needs only one user parameter, that is, very little human intervention is required
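The PCA-then-DCT pipeline with per-band thresholds can be sketched as follows. The exact thresholding rule is an assumption of this sketch (the abstract only says the thresholds come from the PCA result and one user parameter); here low-variance principal-component bands simply get a harsher threshold.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: C @ x transforms, C.T @ y inverts."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def sac_sketch(bands, user_rate=0.05):
    """Sketch of the SAC idea: PCA decorrelates the bands, a 1D DCT is
    applied to each principal-component band, and each band is
    thresholded in proportion to how little variance it carries."""
    x = bands.reshape(bands.shape[0], -1).astype(float)
    mean = x.mean(axis=1, keepdims=True)
    w, v = np.linalg.eigh(np.cov(x))          # PCA of the band covariance
    pcs = v.T @ (x - mean)                    # decorrelated bands
    c = dct_matrix(pcs.shape[1])
    coeffs = pcs @ c.T                        # DCT along each band
    # One threshold per band: harsher for low-variance (low-information) bands
    thr = user_rate * np.abs(coeffs).max() * np.sqrt(w.max() / (w + 1e-12))
    coeffs = np.where(np.abs(coeffs) < thr[:, None], 0.0, coeffs)
    recon = v @ (coeffs @ c) + mean           # inverse DCT, then inverse PCA
    return recon.reshape(bands.shape), coeffs

bands = np.random.default_rng(0).normal(size=(3, 16, 16))
recon, coeffs = sac_sketch(bands)
mse = ((bands - recon) ** 2).mean()
print(int((coeffs == 0).sum()) > 0, mse < bands.var())  # some coefficients dropped, small error
```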
                                
Resumo:
Vision is one of the five senses of the human body and, in children, is responsible for up to 80% of the perception of the surrounding world. Studies show that 50% of children with multiple disabilities have some visual impairment, and 4% of all children are diagnosed with strabismus. Strabismus is an eye disorder associated with the motor capacity of the eye, defined as any deviation from perfect ocular alignment. Besides the aesthetic aspect, the child may report blurred or double vision. Ophthalmological conditions that are not diagnosed correctly are the reason for many school dropouts. The Ministry of Education of Brazil points to visual impairment as a challenge for educators of children, particularly in the literacy process. The traditional eye examination for diagnosing strabismus can be accomplished by inducing eye movements through the doctor's instructions to the patient. This procedure can be reproduced through computer-aided analysis of images captured on video. This paper presents a proposal for a distributed system to assist health professionals in the remote diagnosis of visual impairments associated with the motor abilities of the eye, such as strabismus. It is hoped that this proposal will contribute to improving school learning rates for children by allowing better diagnosis and, consequently, better student follow-up
                                
Resumo:
Modern wireless systems employ adaptive techniques to provide high throughput while meeting the desired coverage, Quality of Service (QoS) and capacity. An alternative for further enhancing the data rate is to apply cognitive radio concepts, where a system exploits unused spectrum on existing licensed bands by sensing the spectrum and opportunistically accessing unused portions. Techniques like Automatic Modulation Classification (AMC) can be helpful, or even vital, in such scenarios. Usually, AMC implementations rely on some form of signal pre-processing, which may introduce a high computational cost or make assumptions about the received signal that may not hold (e.g. Gaussianity of the noise). This work proposes a new AMC method that uses a similarity measure from the Information Theoretic Learning (ITL) framework known as the correntropy coefficient. It extracts similarity measurements over a pair of random processes using higher-order statistics, yielding better similarity estimates than, for example, the correlation coefficient. Experiments carried out by means of computer simulation show that the proposed technique achieves a high success rate in the classification of digital modulations, even in the presence of additive white Gaussian noise (AWGN)
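A sample-based estimate of the correntropy coefficient can be sketched as below, following the usual ITL definitions (centered correntropy with a Gaussian kernel, normalized to [-1, 1]); the bandwidth and test signals are assumptions of this sketch, not the paper's setup.

```python
import numpy as np

def centered_correntropy(x, y, sigma=1.0):
    """Centered correntropy: the mean Gaussian kernel over paired samples
    minus the mean kernel over all cross pairs (bandwidth sigma)."""
    g = lambda d: np.exp(-d ** 2 / (2 * sigma ** 2))
    paired = g(x - y).mean()
    cross = g(x[:, None] - y[None, :]).mean()
    return paired - cross

def correntropy_coefficient(x, y, sigma=1.0):
    """Correntropy coefficient: centered correntropy normalized by the
    auto terms, analogous to (but using higher-order statistics than)
    the correlation coefficient."""
    num = centered_correntropy(x, y, sigma)
    den = np.sqrt(centered_correntropy(x, x, sigma) *
                  centered_correntropy(y, y, sigma))
    return num / den

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
s = np.sign(np.sin(2 * np.pi * 10 * t))        # a crude BPSK-like waveform
noisy = s + 0.1 * rng.normal(size=t.size)
matched = correntropy_coefficient(s, noisy)
unrelated = correntropy_coefficient(s, rng.normal(size=t.size))
print(matched > 0.9, abs(unrelated) < 0.5)     # high for matched, low for unrelated
```

Classification then amounts to comparing a received signal against candidate modulation templates and picking the highest coefficient.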
                                
Resumo:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research attracts most of the investment in the area. Acquisition, processing and interpretation of seismic data are the stages that make up a seismic study. Seismic processing, in particular, focuses on producing an image that represents the geological structures in the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by hardware advances that brought higher storage and digital processing capabilities, enabling more sophisticated processing algorithms such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be very time-consuming, due to the heuristics of the mathematical algorithms and the extensive amount of input and output data involved; it can take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impractical. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique.
Furthermore, analyses of speedup and efficiency were performed and, ultimately, the degree of algorithmic scalability was identified with respect to the technological advances expected in future processors
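The scalability metrics mentioned above have standard definitions, sketched here with hypothetical timings (illustrative numbers, not results from the work):

```python
def speedup_and_efficiency(t_serial, t_parallel, n_threads):
    """Standard metrics for evaluating a parallelization such as the
    OpenMP RTM core: speedup S = T1/Tp and efficiency E = S/p,
    where E = 1.0 means perfect linear scaling."""
    s = t_serial / t_parallel
    return s, s / n_threads

# Hypothetical timings: 1200 s serial vs 200 s on 8 threads
s, e = speedup_and_efficiency(t_serial=1200.0, t_parallel=200.0, n_threads=8)
print(s, e)  # 6.0 0.75
```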
Intelligent system for the detection of oil slicks on the sea surface in SAR images
                                
Resumo:
An oil spill at sea, accidental or not, generates enormous negative consequences for the affected area. The damage is environmental and economic, especially when the slicks approach preservation areas and/or coastal zones. The development of automatic techniques for identifying oil slicks on the sea surface in Radar images supports comprehensive monitoring of the oceans and seas. However, slicks of different origins can appear in this type of imaging, which makes identification a very difficult task. The system proposed in this work, based on digital image processing techniques and artificial neural networks, aims to identify the analyzed slick and to distinguish oil from other slick-generating phenomena. Tests on the functional blocks that compose the proposed system allow different algorithms to be implemented, as well as their detailed and prompt analysis. The digital image processing algorithms (speckle filtering and gradient) and the classifier algorithms (Multilayer Perceptron, Radial Basis Function, Support Vector Machine and Committee Machine) are presented and discussed. The final performance of the system, with different kinds of classifiers, is presented by means of ROC curves. The true-positive rates obtained agree with the literature on oil slick detection in SAR images
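The speckle-filtering block can be illustrated with a simple median filter, one of the classical choices for suppressing the multiplicative speckle noise of SAR images before segmentation (the abstract does not name its exact filter, so this is a representative stand-in):

```python
import numpy as np

def median_speckle_filter(img, win=3):
    """Sliding-window median filter: replaces each pixel by the median of
    its win x win neighborhood, removing isolated speckle spikes while
    roughly preserving edges (important for slick boundaries)."""
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + win, j:j + win])
    return out

# A flat patch with an isolated speckle spike is cleaned up:
img = np.ones((5, 5)); img[2, 2] = 50.0
print(median_speckle_filter(img)[2, 2])  # 1.0 (the spike is removed)
```

The filtered image then feeds the gradient stage and, downstream, the neural classifiers whose ROC curves the work reports.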
 
                    