35 results for digital image processing
Abstract:
AIRES, Kelson R. T.; ARAÚJO, Hélder J.; MEDEIROS, Adelardo A. D. Plane Detection from Monocular Image Sequences. In: VISUALIZATION, IMAGING AND IMAGE PROCESSING, 2008, Palma de Mallorca, Spain. Proceedings... Palma de Mallorca: VIIP, 2008.
Abstract:
This work presents the results of a survey of the oil-producing region of the city of Macau, on the northern coast of Rio Grande do Norte. The work was carried out under the Project for Monitoring Environmental Change and the Influence of Hydrodynamic Forcing on Beach Morphology in the Macau and Serra fields, with the support of the Geoprocessing Laboratory, linked to PRH-22 (Training Program in Geology, Geophysics and Information Technology for Oil and Gas, Department of Geology/CCET/UFRN) and to the Graduate Program in Petroleum Science and Engineering (PPGCEP/UFRN). Within an economic-ecological context, this paper assesses the importance of the mangrove ecosystem in the region of Macau and its surroundings, and then investigates potential areas for reforestation and/or environmental restoration projects. The first phase confirmed the ecological importance of mangrove forests, whose primary functions include: (i) protection and stabilization of the shoreline; (ii) serving as a nursery for marine life; (iii) supplying organic matter to aquatic ecosystems; and (iv) providing refuge for many species. In the second phase, using Landsat imagery and Digital Image Processing (DIP) techniques, about 18,000 hectares of land were identified that could support environmental projects eligible, under the rules of the Kyoto Protocol, for the carbon market. The results also revealed a total area of 14,723.75 hectares occupied by shrimp farming and salt production that could be harnessed for the social, economic and environmental potential of the region; over 60% of this area, i.e., roughly 8,800 hectares, could be used for planting mangroves of the genus Avicennia, considered in the literature the species that best sequesters atmospheric carbon, reaching a mean value of 59.79 tons/ha of mangrove.
Abstract:
This work addresses the importance of image compression for industry: processing and storing images is a standing challenge at Petrobras, where the goal is to optimize storage time and to store the maximum number of images and data. We present an interactive system for processing and storing images in the wavelet domain, together with an interface for digital image processing. The proposal is based on the Peano function and the 1D wavelet transform. The storage system aims to optimize computational space, both for storage and for transmission of images. The Peano function is applied to linearize the images, and the 1D wavelet transform to decompose them. These operations extract the information relevant for storing an image at lower computational cost and with a very small margin of error when comparing the original and processed images; that is, there is little loss of quality when the proposed processing system is applied. The results obtained from the information extracted from the images are displayed in a graphical interface, through which the user can view and analyze the output of the programs directly on screen without dealing with source code. The graphical interface and the programs for image processing via the Peano function and the 1D wavelet transform were developed in Java, allowing a direct exchange of information between them and the user.
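The linearize-then-decompose pipeline described above can be sketched roughly as follows. This is a minimal stand-in, not the thesis's Java implementation: a simple snake (boustrophedon) scan substitutes for the true Peano space-filling curve, and a single-level Haar transform substitutes for the unspecified 1D wavelet.

```python
import numpy as np

def snake_scan(img):
    """Linearize a 2D image into 1D, reversing every other row.

    A stand-in for the Peano space-filling curve used in the thesis:
    both map 2D pixels to a 1D sequence while keeping neighbours close.
    """
    rows = [row if i % 2 == 0 else row[::-1] for i, row in enumerate(img)]
    return np.concatenate(rows)

def haar_1d(signal):
    """One level of the 1D Haar wavelet transform (averages, details)."""
    pairs = signal.reshape(-1, 2)
    avg = (pairs[:, 0] + pairs[:, 1]) / 2.0
    det = (pairs[:, 0] - pairs[:, 1]) / 2.0
    return avg, det

def haar_1d_inverse(avg, det):
    """Exact inverse of haar_1d."""
    out = np.empty(avg.size * 2)
    out[0::2] = avg + det
    out[1::2] = avg - det
    return out

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(8, 8)).astype(float)

line = snake_scan(img)                 # 2D -> 1D linearization
avg, det = haar_1d(line)               # wavelet decomposition
det[np.abs(det) < 2.0] = 0.0           # discard small details (lossy storage)
restored = haar_1d_inverse(avg, det)   # reconstruction

# Small reconstruction error, as the thesis reports for its scheme
mse = np.mean((line - restored) ** 2)
```

Zeroing only details below the threshold bounds the per-sample squared error by the threshold squared, which is the sense in which the "margin of error" stays small.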
Abstract:
This work develops a mathematical foundation for digital signal processing from the point of view of interval mathematics. It addresses the open problem of precision and representation of data in digital systems through an interval version of signal representation. Signal processing is a rich and complex area, so this work restricts its focus to linear time-invariant systems. A vast literature exists in the area, but some concepts of interval mathematics need to be redefined or elaborated in order to build a solid theory of interval signal processing. We construct the basic foundations of signal processing in the interval setting, including interval versions of basic properties such as linearity, stability and causality, and an interval version of linear systems and their properties. Interval versions of the convolution and of the Z-transform are presented, and the convergence of systems is analyzed using the interval Z-transform, an essentially interval distance, interval complex numbers, and an application to an interval filter.
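An interval convolution of the kind described can be sketched as follows, assuming the standard interval-arithmetic rules for sum and product (the thesis's own definitions may refine these). Each sample is an interval (lo, hi) modelling representation uncertainty, and the result encloses every exact convolution of point signals drawn from those intervals.

```python
def imul(a, b):
    """Interval product [a]*[b]: min/max over the four endpoint products."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def iadd(a, b):
    """Interval sum: endpoints add."""
    return (a[0] + b[0], a[1] + b[1])

def iconv(x, h):
    """Interval convolution of two interval sequences."""
    n = len(x) + len(h) - 1
    y = [(0.0, 0.0)] * n
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] = iadd(y[i + j], imul(xi, hj))
    return y

# A point signal widened by +/-0.01 to model quantization error,
# convolved with an exactly known filter
x = [(1.0 - 0.01, 1.0 + 0.01), (2.0 - 0.01, 2.0 + 0.01)]
h = [(0.5, 0.5), (0.25, 0.25)]
y = iconv(x, h)
```

The exact (point) convolution of [1, 2] with [0.5, 0.25] is [0.5, 1.25, 0.5], and each output interval of `y` contains the corresponding exact value, which is the enclosure property interval signal processing is built on.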
Abstract:
This work proposes the development of a Computer System for Analysis of Mammograms (SCAM) that aids the specialist physician in identifying and analyzing lesions in digital mammograms. The system applies a set of Digital Image Processing (DIP) techniques to help the medical professional extract the information contained in the mammogram. It offers an easy-to-use interface providing, from the supplied mammogram, a set of processing operations: enhancement of the images through filtering techniques, segmentation of regions of the mammogram, computation of lesion areas, thresholding of the lesion, and other tools relevant to the professional's diagnosis. The wavelet transform is integrated into the system to allow a multiresolution analysis, thus supplying a method for identifying and analyzing microcalcifications.
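One level of a 2D Haar decomposition illustrates the multiresolution analysis mentioned above; this is a minimal sketch, not SCAM's actual wavelet or parameters, which the abstract does not specify. Small bright structures such as microcalcifications tend to stand out in the high-frequency detail bands, where they can be thresholded.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet decomposition.

    Splits the image into an approximation band (LL) and three
    detail bands (LH, HL, HH) at half the resolution.
    """
    a = img[0::2, :] + img[1::2, :]       # vertical sums
    d = img[0::2, :] - img[1::2, :]       # vertical differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 4.0  # low-low (approximation)
    lh = (a[:, 0::2] - a[:, 1::2]) / 4.0  # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 4.0  # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 4.0  # diagonal detail
    return ll, lh, hl, hh

# Synthetic 8x8 "mammogram" with one bright 1-pixel spot
img = np.zeros((8, 8))
img[3, 4] = 100.0

ll, lh, hl, hh = haar2d(img)
# The spot leaks energy into the detail bands, where it is easy to detect
detail_energy = np.abs(lh).max() + np.abs(hl).max() + np.abs(hh).max()
```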
Abstract:
Image segmentation is one of the image processing problems that deserves special attention from the scientific community. This work studies unsupervised clustering and pattern recognition methods applicable to medical image segmentation. Methods based on Natural Computing have proven very attractive for such tasks and are studied here to verify their applicability to medical image segmentation. The following methods are implemented: GKA (Genetic K-means Algorithm), GFCMA (Genetic FCM Algorithm), PSOKA (PSO- and K-means-based clustering algorithm) and PSOFCM (PSO- and FCM-based clustering algorithm). To evaluate the results produced by the algorithms, clustering validity indexes are used as quantitative measures. Visual and qualitative evaluations are also carried out, mainly using data from the BrainWeb brain simulator as ground truth.
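All four methods above build on K-means or FCM as the base clustering step; the genetic and PSO variants optimize its initialization and search. A plain K-means on 1D intensities (deterministic quantile initialization, an assumption of this sketch rather than the dissertation's setup) shows the base step:

```python
import numpy as np

def kmeans(values, k, iters=20):
    """Plain K-means on 1D intensities: the base clustering step that
    the genetic (GKA) and PSO (PSOKA) variants optimize."""
    # Deterministic spread-out initialization via quantiles
    centers = np.quantile(values, np.linspace(0, 1, k))
    for _ in range(iters):
        # Assign each value to its nearest center
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # Move each center to the mean of its cluster
        for c in range(k):
            if np.any(labels == c):
                centers[c] = values[labels == c].mean()
    return labels, centers

# Synthetic "image" intensities: two tissue classes plus noise
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(50, 3, 200), rng.normal(150, 3, 200)])
labels, centers = kmeans(img, k=2)
```

On well-separated intensities the two centers converge near the class means (about 50 and 150 here), segmenting the pixels by label.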
Abstract:
Several mobile robot navigation methods require measuring the robot's position and orientation in its workspace. For wheeled mobile robots, techniques based on odometry determine the robot's localization by integrating the incremental displacements of its wheels. However, this technique is subject to errors that accumulate with the distance traveled, making its exclusive use unfeasible. Other methods are based on detecting natural or artificial landmarks whose locations in the environment are known. This approach does not generate cumulative errors, but it can require more processing time than odometry-based methods. Thus, many methods use both techniques, so that odometry errors are periodically corrected by measurements obtained from landmarks. Following this approach, this work proposes a hybrid localization system for wheeled mobile robots in indoor environments, based on odometry and natural landmarks. The landmarks are straight lines defined by the junctions in the environment's floor, forming a two-dimensional grid. Landmark detection in digital images is performed through the Hough transform, with heuristics associated with the transform to allow its application in real time. To reduce the landmark search time, we propose mapping odometry errors onto a region of the captured image that has a high probability of containing the sought landmark.
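The line-detection step can be sketched with a minimal Hough transform. This is an illustrative accumulator only; the thesis's real-time heuristics and search-window restriction are not reproduced here. Each edge pixel votes for the (rho, theta) parameters of every line passing through it, and floor-junction lines show up as peaks in the accumulator.

```python
import numpy as np

def hough_lines(edges, n_theta=180, n_rho=100):
    """Minimal Hough transform over a boolean edge map."""
    h, w = edges.shape
    diag = np.hypot(h, w)                       # max possible |rho|
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta))
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # signed distance
        idx = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += 1               # one vote per theta
    return acc, thetas

# Synthetic edge map with one horizontal floor junction at y = 5
edges = np.zeros((20, 20), dtype=bool)
edges[5, :] = True

acc, thetas = hough_lines(edges)
rho_i, theta_i = np.unravel_index(np.argmax(acc), acc.shape)
```

The peak lands at theta = pi/2 (a horizontal line) with one vote per edge pixel; restricting the voting to a predicted sub-image, as the thesis proposes, shrinks the loop over edge pixels.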
Abstract:
Image compression consists in representing an image with a small amount of data without losing visual quality. Compression matters when large images are used, for example satellite images. Full-color digital images typically use 24 bits to specify the color of each pixel, 8 bits for each of the primary components: red, green and blue (RGB). Compressing an image with three or more bands (multispectral) is fundamental to reduce transmission, processing and storage time, and many applications depend on such images: medical imaging, satellite imaging, remote sensing and others. In this work a new method for compressing color images is proposed, based on a measure of the information in each band. The technique, called Self-Adaptive Compression (SAC), compresses each band of the image with a different threshold in order to preserve information: it applies strong compression to highly redundant (low-information) bands and gentle compression to bands carrying more information. Two image transforms are used: the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). The first step converts the data into uncorrelated bands using PCA; the DCT is then applied to each band. Loss is introduced by a threshold that discards small coefficients. This threshold is computed from two elements: the PCA result and a single user parameter that defines the compression rate. The system produces three different thresholds, one for each band, proportional to its amount of information. Image reconstruction applies the inverse DCT and the inverse PCA. SAC was compared with the JPEG (Joint Photographic Experts Group) standard and with YIQ compression, obtaining better results in MSE (Mean Squared Error). Tests show that SAC yields better quality at high compression rates, with two advantages: (a) being adaptive, it is sensitive to the image type, presenting good results for diverse kinds of images (synthetic, landscapes, people, etc.); and (b) it needs only one user parameter, so very little human intervention is required.
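The PCA-then-DCT pipeline with per-band thresholds can be sketched as below. This is a toy illustration of the idea only: the exact rule by which SAC derives the three thresholds from the PCA result and the user parameter is not given in the abstract, so the formula here (threshold inversely related to a band's eigenvalue, scaled by a single `user_tax` parameter) is an assumption.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as a matrix, so dct(x) = D @ x."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    d = np.cos(np.pi * k * (2 * i + 1) / (2 * n)) * np.sqrt(2.0 / n)
    d[0, :] /= np.sqrt(2.0)
    return d

rng = np.random.default_rng(0)
# Toy RGB image flattened to (pixels, 3); bands deliberately correlated
base = rng.normal(0, 10, size=64)
rgb = np.stack([base,
                0.9 * base + rng.normal(0, 1, 64),
                rng.normal(0, 1, 64)], axis=1)

# Step 1 (PCA): decorrelate the bands; eigenvalues measure band information
cov = np.cov(rgb, rowvar=False)
evals, evecs = np.linalg.eigh(cov)      # ascending eigenvalues
pcs = rgb @ evecs                       # decorrelated "bands"

# Step 2 (DCT) + per-band threshold: low-information bands are cut harder
D = dct_matrix(pcs.shape[0])
user_tax = 0.5                          # the single user parameter
kept = 0
for b in range(3):
    coef = D @ pcs[:, b]
    # Assumed rule: redundant bands (small eigenvalue) get a larger threshold
    thr = user_tax * np.sqrt(evals.max() / max(evals[b], 1e-12))
    coef[np.abs(coef) < thr] = 0.0
    kept += np.count_nonzero(coef)
    pcs[:, b] = D.T @ coef              # inverse DCT (orthonormal)

recon = pcs @ evecs.T                   # inverse PCA
mse = np.mean((rgb - recon) ** 2)
```

Because both transforms are orthonormal, the reconstruction error equals the energy of the discarded coefficients, which by construction is concentrated in the redundant bands.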
Abstract:
Vision is one of the five human senses and, in children, is responsible for up to 80% of the perception of the surrounding world. Studies show that 50% of children with multiple disabilities have some visual impairment, and 4% of all children are diagnosed with strabismus. Strabismus is an eye disorder associated with the motor control of the eye, defined as any deviation from perfect ocular alignment. Besides the aesthetic aspect, the child may report blurred or double vision. Ophthalmological conditions that are not correctly diagnosed are the reason behind many school dropouts. The Ministry of Education of Brazil points to visual impairment as a challenge for educators of children, particularly in the literacy process. The traditional eye examination for diagnosing strabismus is accomplished by inducing eye movements through the doctor's instructions to the patient. This procedure can be reproduced through computer-aided analysis of images captured on video. This paper presents a proposal for a distributed system to assist health professionals in the remote diagnosis of visual impairments associated with the motor abilities of the eye, such as strabismus. Through this proposal we hope to contribute to improving school learning rates for children, by allowing better diagnosis and, consequently, better student support.
Abstract:
Modern wireless systems employ adaptive techniques to provide high throughput while observing desired coverage, Quality of Service (QoS) and capacity constraints. An alternative to further enhance data rates is to apply cognitive radio concepts, in which a system exploits unused spectrum on existing licensed bands by sensing the spectrum and opportunistically accessing unused portions. Techniques such as Automatic Modulation Classification (AMC) can be helpful, or even vital, in such scenarios. AMC implementations usually rely on some form of signal pre-processing, which may introduce a high computational cost or make assumptions about the received signal that may not hold (e.g., Gaussianity of the noise). This work proposes a new AMC method that uses a similarity measure from the Information Theoretic Learning (ITL) framework known as the correntropy coefficient. It extracts similarity measurements over a pair of random processes using higher-order statistics, yielding better similarity estimates than, e.g., the correlation coefficient. Experiments carried out by means of computer simulation show that the proposed technique achieves a high success rate in the classification of digital modulations, even in the presence of additive white Gaussian noise (AWGN).
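The correntropy coefficient itself is straightforward to compute; a sketch following its standard ITL definition (centered cross-correntropy normalized by the centered auto-correntropies, with a Gaussian kernel — the thesis's kernel width and classification pipeline are not specified here, so the signals and sigma below are illustrative):

```python
import numpy as np

def gaussian_kernel(d, sigma):
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def centered_correntropy(x, y, sigma=1.0):
    """Mean kernel over aligned pairs minus mean kernel over all pairs."""
    aligned = gaussian_kernel(x - y, sigma).mean()
    allpairs = gaussian_kernel(x[:, None] - y[None, :], sigma).mean()
    return aligned - allpairs

def correntropy_coefficient(x, y, sigma=1.0):
    """Similarity measure using higher-order statistics via the kernel;
    equals 1 for identical signals."""
    num = centered_correntropy(x, y, sigma)
    den = np.sqrt(centered_correntropy(x, x, sigma) *
                  centered_correntropy(y, y, sigma))
    return num / den

t = np.linspace(0, 1, 200, endpoint=False)
bpsk = np.sign(np.sin(2 * np.pi * 10 * t))           # toy modulated template
noisy = bpsk + 0.1 * np.random.default_rng(0).normal(size=t.size)  # AWGN
other = np.sin(2 * np.pi * 3 * t)                    # a different waveform

same = correntropy_coefficient(bpsk, noisy)
diff = correntropy_coefficient(bpsk, other)
```

In an AMC setting, a received signal would be compared against templates of each candidate modulation and assigned to the one with the highest coefficient; here the noisy copy scores far higher against its own template than the unrelated waveform does.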
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research attracts most of the investment in the area. Acquisition, processing and interpretation of seismic data are the stages that make up a seismic study. Seismic processing, in particular, is focused on producing an image of the geological structures in the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by hardware advances that delivered greater storage and digital processing capabilities, enabling more sophisticated processing algorithms such as those that use parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults and salt domes, among other structures of interest such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be very time-consuming, due to the heuristics of the mathematical algorithms and the extensive amount of input and output data involved; it may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impractical. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, speedup and efficiency analyses were performed and, ultimately, the degree of algorithmic scalability was assessed with respect to the technological advances expected in future processors.
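The RTM core being parallelized is, at heart, an explicit finite-difference time stepper for the acoustic wave equation, applied forward in time for the source field and reversed in time for the receiver field. A serial sketch of one such step (the thesis's actual kernel, grid, stencil order and boundary treatment are not specified; this constant-velocity 2D stencil is purely illustrative — the grid loop that NumPy vectorizes here is what OpenMP would parallelize):

```python
import numpy as np

def wave_step(p_prev, p_curr, vel, dt, dx):
    """One explicit finite-difference step of the 2D acoustic wave
    equation, p_tt = v^2 (p_xx + p_yy)."""
    lap = np.zeros_like(p_curr)
    # 5-point Laplacian on the interior of the grid
    lap[1:-1, 1:-1] = (p_curr[2:, 1:-1] + p_curr[:-2, 1:-1] +
                       p_curr[1:-1, 2:] + p_curr[1:-1, :-2] -
                       4.0 * p_curr[1:-1, 1:-1]) / dx ** 2
    # Leapfrog update in time
    return 2.0 * p_curr - p_prev + (vel * dt) ** 2 * lap

n, dx, dt = 101, 10.0, 1e-3
vel = 2000.0                      # constant velocity model, m/s (CFL-stable)
p_prev = np.zeros((n, n))
p_curr = np.zeros((n, n))
p_curr[n // 2, n // 2] = 1.0      # point source at the grid center

for _ in range(50):               # forward propagation; RTM also runs the
    p_prev, p_curr = p_curr, wave_step(p_prev, p_curr, vel, dt, dx)
```

In an OpenMP version, the outer loop over grid rows of this stencil would carry a `parallel for` directive; the imaging condition then cross-correlates the forward and time-reversed fields.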
Intelligent system for detecting oil slicks on the sea surface in SAR images
Abstract:
Oil spills at sea, accidental or not, generate enormous negative consequences for the affected area. The damage is environmental and economic, especially when the slicks approach preservation areas and/or coastal zones. Automatic techniques for identifying oil slicks on the sea surface in radar images assist in the comprehensive monitoring of oceans and seas. However, slicks of different origins appear in this type of imaging, which makes identification a very difficult task. The system proposed in this work, based on digital image processing techniques and artificial neural networks, aims to identify the analyzed slick and to discern oil from other slick-generating phenomena. Tests on the functional blocks that compose the proposed system allow different algorithms to be implemented and analyzed in detail. The digital image processing algorithms (speckle filtering and gradient) and the classifier algorithms (Multilayer Perceptron, Radial Basis Function, Support Vector Machine and Committee Machine) are presented and discussed. The final performance of the system with the different classifiers is presented by ROC curves, with true positive rates in agreement with the literature on oil slick detection in SAR images.
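The speckle-filtering stage can be illustrated with a Lee filter, one of the classic adaptive speckle filters for SAR (the abstract does not say which filter the system uses, so this is a representative example; the window size and noise variance below are illustrative). Each pixel is pulled toward its local mean by a gain that depends on how much of the local variance looks like speckle.

```python
import numpy as np

def lee_filter(img, win=3, noise_var=0.05):
    """Lee adaptive speckle filter (window statistics computed naively
    for clarity; gain -> 0 in flat, pure-speckle areas)."""
    pad = win // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + win, j:j + win]
            mean, var = w.mean(), w.var()
            gain = var / (var + noise_var)
            out[i, j] = mean + gain * (img[i, j] - mean)
    return out

rng = np.random.default_rng(0)
# Dark "oil slick" stripe on a brighter sea, with multiplicative speckle
scene = np.ones((16, 16))
scene[:, 4:8] = 0.2
sar = scene * rng.gamma(shape=10, scale=0.1, size=scene.shape)

filtered = lee_filter(sar)
```

Smoothing the speckle in the open-sea region while keeping the dark slick intact is exactly what makes the subsequent gradient and classification stages workable.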
Abstract:
There has been an increasing tendency toward selective image compression, since several applications make use of digital images and, in some cases, loss of information in certain regions is not acceptable. However, in applications where images are captured and stored automatically, the user cannot select the regions of interest to be compressed losslessly. A possible solution is the automatic selection of these regions, a very difficult problem in the general case; nevertheless, intelligent techniques can detect such regions in specific cases. This work proposes a selective color image compression method in which previously chosen regions of interest are compressed losslessly. The method uses the wavelet transform to decorrelate the pixels of the image, a competitive neural network to perform vector quantization, mathematical morphology, and adaptive Huffman coding. Besides manual selection, there are two options for automatic detection: a texture segmentation method, in which the highest-frequency texture is selected as the region of interest, and a new face detection method, in which the face region is compressed losslessly. The results show that both can be successfully used with the compression method, given the map of the region of interest as input.
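The competitive-network vector quantization step can be sketched with winner-take-all learning: the codeword closest to each training vector is nudged toward it, and the learned codeword indices are what the adaptive Huffman coder would then entropy-code. This is a generic sketch (the thesis's network topology, learning schedule and codebook size are not specified; the values below are illustrative).

```python
import numpy as np

def train_vq(vectors, n_codes=4, epochs=20, lr=0.2, seed=0):
    """Competitive-network vector quantization: winner-take-all
    learning moves the winning codeword toward each training vector."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), n_codes, replace=False)].copy()
    for _ in range(epochs):
        for v in vectors:
            win = np.argmin(((codebook - v) ** 2).sum(axis=1))  # winner unit
            codebook[win] += lr * (v - codebook[win])            # update
    return codebook

def quantize(vectors, codebook):
    """Replace each vector by the index of its nearest codeword."""
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(1)
# 2x2 pixel blocks (as 4-vectors) drawn from two distinct "textures"
blocks = np.vstack([rng.normal(0.2, 0.02, (50, 4)),
                    rng.normal(0.8, 0.02, (50, 4))])
codebook = train_vq(blocks, n_codes=4)
codes = quantize(blocks, codebook)
```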
Abstract:
This thesis studies the use of argumentation as a discursive element in digital media, particularly blogs. We analyzed the blog "Fatos e Dados" [Facts and Data], created by Petrobras amid allegations of corruption that culminated in the installation of a Parliamentary Commission of Inquiry in the Brazilian Congress to investigate the company. We seek to understand the influence that the discursive elements triggered by argumentation exercise in blogs and on agenda-setting. To this end, we work with notions of argumentation in dialogue with questions of language and discourse, drawing on Charaudeau (2006), Citelli (2007), Perelman & Olbrechts-Tyteca (2005), Foucault (2007, 2008a), Bakhtin (2006) and Breton (2003). We also examine our subject from the perspective of social representations, clarifying concepts such as public image and the use of representations as argumentative elements, based on Moscovici (2007). We further consider reflections on hypertext and the context of cyberculture, with authors such as Lévy (1993, 1999, 2003), Castells (2003) and Chartier (1999, 2002), and issues of discourse analysis, especially in Orlandi (1988, 1989, 1996, 2001) and Foucault (2008b). We examined the 118 posts published in the first 30 days of the blog "Fatos e Dados" (between 2 June and 1 July 2009) and analyzed the first ten in detail. A corporate blog aims to defend the organization's points of view and public image and therefore uses elements of social representations to build its arguments. In the posts analyzed, the credibility of Petrobras as the source of information emerges as the main news criterion, alongside the news values of innovation and relevance.
The controversy between the blog and the press resulted from the media's inadequacy and lack of preparation to deal with a corporate blog that was able to exploit the liberation of the emission pole characteristic of cyberculture. The blog is a discursive manifestation in a concrete historical situation, whose understanding and attribution of meaning take place through the social relations between subjects who, most of the time, are in discursive and ideological dispute with one another; this dispute also affects the movements of reading and the production of readings. We conclude that the intersubjective relationships occurring in blogs change, through the argumentative techniques used, the notions of news criteria, interfering with the news agenda and the organization of information in digital media outlets. The influence of the discursive elements triggered by argumentation in digital media is also clear, resizing and reframing the frames of reality conveyed to subject-readers. Blogs have become part of the information scenario that emerged with the Internet and can interfere more effectively in the media agenda through the conscious use of argumentative elements in their posts.
Abstract:
The study region, named Forquilha and located in the northwestern Central Ceará domain (northern portion of the Borborema Province), presents a lithostratigraphic framework composed of Paleoproterozoic metaplutonic rocks, metasedimentary sequences and Neoproterozoic granitoids. The metasedimentary rocks of the Ceará group occupy most of the area. This group is subdivided into two distinct units: Canindé and Independência. The Canindé unit is represented basically by biotite and muscovite paragneisses, with minor metabasic rocks (amphibolite lenses). The Independência sequence is composed of garnetiferous paragneisses, sillimanite-garnet-quartz-muscovite schists and quartz-muscovite schists, pure or muscovite quartzites, and rare marbles. At least three ductile deformation events, named D1, D2 and D3, were recognized in both units of the Ceará group. The first is interpreted as related to low-angle tangential tectonics with southward mass transport. The D2 event is marked by the development of close to isoclinal folds with N-S oriented axes; refolding patterns generated by F1 and F2 superposition are found in several places. The latest event (D3) corresponds to transcurrent tectonics, which led to the development of mega-folds and several shear zones under a transpressional regime. The mapped shear zones are Humberto Monte (ZCHM), Poço Cercado (ZCPC) and Forquilha (ZCF). Digital image processing of enhanced Landsat 7 ETM+ satellite images, combined with field data, demonstrates that these penetrative structures are associated with positive and negative geomorphological patterns, distributed in linear and curvilinear arrangements with tonal banding, corresponding to the ductile fabric and to crests. Several color composites were tested; RGB-531 and RGB-752 provided the best results for lineament analysis of the most prominent shear zones. Spatial filtering techniques (3x3 and 5x5 filters) were also used, and the application of Prewitt filters generated the best products.
The integrated analysis of morphological and textural aspects from the filtered images, the variation of tonality related to the distribution of geologic units in the color composites, and the superposition over a digital elevation model contributed to the characterization of the structural framework of the study area. Kinematic compatibility among the ZCHM, ZCPC and ZCF shear zones, as well as the Sobral-Pedro II (ZCSPII) shear zone situated to the west of the study area, was one of the goals of this work. Two of these shear zones (ZCHM, ZCPC) display sinistral movement, while the others (ZCSPII, ZCF) exhibit dextral kinematics. The 40Ar/39Ar ages obtained in this thesis for ZCSPII and ZCPC, together with other 40Ar/39Ar data from adjacent areas, indicate that all these shear zones are related to the Brasiliano orogeny. The trend of the structures, the opposite shear senses and the similar metamorphic conditions fit a model based on the development of conjugate shear zones in an unconfined transpression setting. A WNW-ESE bulk shortening direction is inferred. The geometry and kinematics of the studied structures suggest that shortening was largely accommodated by lateral extrusion, with only minor vertical stretch.
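The Prewitt spatial filtering mentioned above is a pair of 3x3 gradient convolutions; a minimal sketch on a synthetic scene (a single tonal step standing in for a lineament — the actual Landsat bands and 5x5 variants used in the thesis are not reproduced here):

```python
import numpy as np

def prewitt(img):
    """Prewitt gradient magnitude: 3x3 horizontal and vertical
    derivative kernels, combined as sqrt(gx^2 + gy^2)."""
    kx = np.array([[-1, 0, 1]] * 3, dtype=float)   # horizontal gradient
    ky = kx.T                                      # vertical gradient
    pad = np.pad(img, 1, mode='edge')
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = pad[i:i + 3, j:j + 3]
            gx[i, j] = (w * kx).sum()
            gy[i, j] = (w * ky).sum()
    return np.hypot(gx, gy)

# Synthetic scene: a vertical "lineament" (tonal step) at column 8
img = np.zeros((16, 16))
img[:, 8:] = 1.0

edges = prewitt(img)
```

The filter responds only along the tonal discontinuity and is silent in the uniform areas, which is why it enhances the linear traces of shear zones against homogeneous terrain.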