998 results for "Processamento Digital de Imagem" (Digital Image Processing)


Relevance:

30.00%

Publisher:

Abstract:

Several mobile robot navigation methods require measuring the robot's position and orientation in its workspace. For wheeled mobile robots, odometry-based techniques determine the robot's localization by integrating the incremental displacements of its wheels. However, this technique is subject to errors that accumulate with the distance traveled by the robot, making its exclusive use unfeasible. Other methods are based on the detection of natural or artificial landmarks present in the environment, whose locations are known. This technique does not generate cumulative errors, but it may require more processing time than odometry-based methods. Thus, many methods use both techniques, so that odometry errors are periodically corrected by measurements obtained from landmarks. Following this approach, this work proposes a hybrid localization system for wheeled mobile robots in indoor environments, based on odometry and natural landmarks. The landmarks are straight lines defined by the junctions in the environment's floor, forming a two-dimensional grid. Landmark detection in digital images is performed with the Hough transform, and heuristics are associated with the transform to allow its application in real time. To reduce the landmark search time, we propose mapping the odometry errors onto an area of the captured image that has a high probability of containing the sought mark.
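As a rough illustration of the odometry step described above — integrating incremental wheel displacements into a pose, with errors that accumulate over distance — here is a minimal differential-drive sketch (function and parameter names are ours, not the authors'):

```python
import math

def integrate_odometry(pose, d_left, d_right, wheel_base):
    """Update an (x, y, theta) pose from incremental wheel displacements.

    Encoder errors in d_left/d_right accumulate with distance traveled,
    which is why odometry alone is unfeasible for long-term localization.
    """
    x, y, theta = pose
    d_center = (d_right + d_left) / 2.0        # forward displacement
    d_theta = (d_right - d_left) / wheel_base  # heading change
    # Integrate using the midpoint heading for better accuracy
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)

pose = (0.0, 0.0, 0.0)
for _ in range(10):  # ten equal increments straight ahead
    pose = integrate_odometry(pose, 0.05, 0.05, wheel_base=0.3)
```

In the hybrid scheme, the pose produced here would be corrected whenever a floor-junction landmark is detected.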

Relevance:

30.00%

Publisher:

Abstract:

Image compression consists of representing an image with a small amount of data without loss of visual quality. Data compression is important when large images are used, for example satellite images. Full-color digital images typically use 24 bits to specify the color of each pixel, with 8 bits for each of the primary components: red, green, and blue (RGB). Compressing an image with three or more bands (multispectral) is fundamental to reduce transmission, processing, and storage time. Because many applications depend on images (medical imaging, satellite imaging, sensing, etc.), image data compression is important. In this work a new method for compressing color images is proposed, based on a measure of the information in each band. The technique is called Self-Adaptive Compression (SAC): each band of the image is compressed with a different threshold, in order to preserve information and obtain a better result. SAC applies strong compression to highly redundant bands, that is, bands with less information, and mild compression to bands with a larger amount of information. Two image transforms are used: the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). The first step converts the data into decorrelated bands with PCA; the DCT is then applied to each band. Loss occurs when a threshold discards coefficients. This threshold is computed from two elements: the PCA result and a user parameter that defines the compression rate. The system produces three different thresholds, one for each band of the image, proportional to its amount of information. For image reconstruction, the inverse DCT and inverse PCA are applied. SAC was compared with the JPEG (Joint Photographic Experts Group) standard and with YIQ compression, and better results were obtained in terms of MSE (Mean Squared Error). Tests showed that SAC has better quality at strong compression rates, with two advantages: (a) being adaptive, it is sensitive to the image type, that is, it presents good results for diverse kinds of images (synthetic, landscapes, people, etc.); and (b) it needs only one user parameter, so little human intervention is required.
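A simplified sketch of the two-transform pipeline described above: PCA decorrelation of the three bands, per-band 2-D DCT, and a band-dependent threshold that compresses low-information bands more. The threshold rule here is an illustrative stand-in, not the actual SAC formula:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def sac_sketch(rgb, user_tax):
    """Per-band adaptive DCT thresholding after PCA decorrelation.

    rgb: (h, w, 3) float array; user_tax: the single user parameter
    controlling the overall compression rate (name hypothetical).
    """
    h, w, _ = rgb.shape
    X = rgb.reshape(-1, 3)
    mean = X.mean(axis=0)
    Xc = X - mean
    # PCA: eigendecomposition of the 3x3 band covariance
    evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    bands = (Xc @ evecs).T.reshape(3, h, w)
    D, Dw = dct_matrix(h), dct_matrix(w)
    out = []
    for var, band in zip(evals, bands):
        coeffs = D @ band @ Dw.T  # 2-D DCT of the band
        # Low-information band (small eigenvalue) -> larger threshold,
        # i.e. stronger compression; high-information band -> milder.
        thresh = user_tax * np.abs(coeffs).max() * (evals.max() / (var + 1e-12)) ** 0.5
        coeffs[np.abs(coeffs) < thresh] = 0.0
        out.append(D.T @ coeffs @ Dw)  # inverse DCT
    Xr = np.stack([b.ravel() for b in out], axis=1) @ evecs.T + mean
    return Xr.reshape(h, w, 3)

img = np.random.rand(8, 8, 3)
lossless = sac_sketch(img, user_tax=0.0)  # zero threshold: exact reconstruction
```

With `user_tax = 0` no coefficient is discarded, so the PCA + DCT round trip reconstructs the image exactly; increasing the parameter zeroes more coefficients, most aggressively in the least informative principal band.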

Relevance:

30.00%

Publisher:

Abstract:

Modern wireless systems employ adaptive techniques to provide high throughput while meeting coverage, Quality of Service (QoS), and capacity requirements. An alternative to further enhance data rate is to apply cognitive radio concepts, in which a system exploits unused spectrum on existing licensed bands by sensing the spectrum and opportunistically accessing unused portions. Techniques like Automatic Modulation Classification (AMC) can help or even be vital in such scenarios. Usually, AMC implementations rely on some form of signal pre-processing, which may introduce a high computational cost or make assumptions about the received signal that may not hold (e.g. Gaussianity of the noise). This work proposes a new method to perform AMC using a similarity measure from the Information Theoretic Learning (ITL) framework, known as the correntropy coefficient. It extracts similarity measurements over a pair of random processes using higher-order statistics, yielding better similarity estimates than, for example, the correlation coefficient. Experiments carried out by computer simulation show that the proposed technique achieves a high success rate in classifying digital modulations, even in the presence of additive white Gaussian noise (AWGN).
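The correntropy coefficient can be sketched as follows, assuming the centered-correntropy form from the ITL literature with a Gaussian kernel (the kernel bandwidth `sigma` is a free choice here, not a value from the work):

```python
import numpy as np

def correntropy_coefficient(x, y, sigma=1.0):
    """Centered correntropy coefficient between two 1-D signals.

    The Gaussian kernel expectation brings in higher-order statistics
    of the pair, unlike the ordinary correlation coefficient.
    """
    def gauss(d):
        return np.exp(-d**2 / (2.0 * sigma**2))

    def centered(a, b):
        # E[k(a_i - b_i)] minus the mean kernel value over all cross pairs
        return gauss(a - b).mean() - gauss(a[:, None] - b[None, :]).mean()

    num = centered(x, y)
    den = np.sqrt(centered(x, x) * centered(y, y))
    return num / den if den > 0 else 0.0

t = np.linspace(0, 1, 50)
self_sim = correntropy_coefficient(np.sin(2 * np.pi * t), np.sin(2 * np.pi * t))
```

An AMC scheme along these lines would compare a received signal against candidate modulation templates and pick the one with the highest coefficient.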

Relevance:

30.00%

Publisher:

Abstract:

The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research attracts most of the investment in the area. Acquisition, processing, and interpretation of seismic data are the stages of a seismic study. Seismic processing in particular focuses on producing an image that represents the geological structures in the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by hardware advances that brought greater storage and digital processing capacity, enabling more sophisticated processing algorithms, such as those that exploit parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults, salt domes, and other structures of interest, such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be very time-consuming, due to the heuristics of the mathematical algorithms and the large volume of input and output data involved; it may take days, weeks, or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impractical. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, speedup and efficiency analyses were performed, and, finally, the degree of algorithmic scalability was identified with respect to the technological advances expected in future processors.
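The speedup and efficiency metrics mentioned above are the standard ones for evaluating a parallelization such as this; a minimal sketch with hypothetical timings:

```python
def speedup(t_serial, t_parallel):
    """Speedup S(p) = T1 / Tp: serial time over time on p threads."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, p):
    """Efficiency E(p) = S(p) / p; 1.0 means ideal linear scaling."""
    return speedup(t_serial, t_parallel) / p

# Hypothetical wall-clock times (seconds) for an RTM kernel,
# serial vs. an 8-thread OpenMP run
t1, t8 = 960.0, 150.0
s = speedup(t1, t8)        # 6.4x over serial
e = efficiency(t1, t8, 8)  # 0.8, i.e. 80% of ideal scaling
```

Plotting these quantities against the thread count is how the scalability degree of the parallelized kernel is usually characterized.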

Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

30.00%

Publisher:

Abstract:

This thesis studies the use of argumentation as a discursive element in digital media, particularly blogs. We analyzed the blog "Fatos e Dados" [Facts and Data], created by Petrobras in the context of corruption allegations that culminated in the installation of a Parliamentary Commission of Inquiry in Congress to investigate the company. We seek to understand the influence that the discursive elements triggered by argumentation exert on blogs and on agenda-setting. To this end, we work with notions of argumentation in dialogue with questions of language and discourse, drawing on the work of Charaudeau (2006), Citelli (2007), Perelman & Olbrechts-Tyteca (2005), Foucault (2007, 2008a), Bakhtin (2006), and Breton (2003). We also approach our subject from the perspective of social representations, seeking to clarify concepts such as public image and the use of representations as argumentative elements, following Moscovici (2007). We further consider reflections on hypertext and the context of cyberculture, with authors such as Lévy (1993, 1999, 2003), Castells (2003), and Chartier (1999, 2002), and questions of discourse analysis, especially in Orlandi (1988, 1989, 1996, 2001) and in Foucault (2008b). We examined the 118 posts published in the first 30 days of the blog "Fatos e Dados" (between 2 June and 1 July 2009) and analyzed the first ten in detail. A corporate blog aims to defend the organization's points of view and public image and therefore uses elements of social representations to build its arguments. The blog's main news criterion, including in the posts we reviewed, is the credibility of Petrobras as the source of the information; in the posts analyzed, the news values of novelty and relevance also appear.
The controversy between the blog and the press resulted from the press's inadequacy and lack of preparation to deal with a corporate blog that was able to exploit the liberation of the emission pole characteristic of cyberculture. The blog is a discursive manifestation in a concrete historical situation, whose understanding and attribution of meaning take place through social relations between subjects who, most of the time, place themselves in discursive and ideological dispute with one another; this dispute also affects the movements of reading and the production of readings. We conclude that the intersubjective relationships that occur in blogs change, through the argumentative techniques used, the notions of news criteria, interfering with the news agenda and with the organization of information in digital media outlets. The influence that the discursive elements triggered by argumentation exert on digital media is also clear, resizing and reframing the frames of reality conveyed to subject-readers. Blogs have become part of the information scenario with the emergence of the Internet and are able to interfere more effectively in the media agenda through the conscious use of argumentative elements in their posts.

Relevance:

30.00%

Publisher:

Abstract:

Nowadays several electronic devices support digital video; examples include cell phones, digital cameras, video cameras, and digital televisions. However, raw video contains a huge amount of data, millions of bits, to represent the sequence as captured. Storing it in its primary form would require enormous disk space, and transmitting it would require enormous bandwidth. Video compression is therefore essential to make storage and transmission of this information possible. Motion estimation is a technique used in the video coder that exploits the temporal redundancy present in video sequences to reduce the amount of data necessary to represent the information. This work presents a hardware architecture of a motion estimation module for high-resolution video according to the H.264/AVC standard. H.264/AVC is the most advanced video coding standard, with several new features that allow it to achieve high compression rates. The architecture presented in this work was developed to provide high data reuse; the adopted data-reuse scheme reduces the bandwidth required to execute motion estimation. Motion estimation is the task responsible for the largest share of the gains obtained with the H.264/AVC standard, so this module is essential for the final video coder's performance. This work is part of the Rede H.264 project, which aims to develop Brazilian technology for the Brazilian Digital Television System.
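Motion estimation by exhaustive block matching with a sum-of-absolute-differences (SAD) cost can be sketched in software as follows; the thesis implements the equivalent in hardware, and the names and search policy here are illustrative:

```python
import numpy as np

def full_search(ref, cur_block, top, left, radius):
    """Exhaustive block-matching motion estimation with SAD cost.

    ref: previous frame (2-D array); cur_block: block from the current
    frame whose top-left corner sits at (top, left); radius: search
    range in pixels. Returns the (dy, dx) motion vector with lowest SAD.
    """
    n = cur_block.shape[0]
    best, best_sad = (0, 0), np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > ref.shape[0] or x + n > ref.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(ref[y:y+n, x:x+n] - cur_block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

# Synthetic frame in which every block position is unique, so the
# true displacement is recovered exactly
ref = np.arange(32 * 32).reshape(32, 32)
cur_block = ref[10:18, 7:15]  # content that "moved" to (8, 8) in the current frame
mv = full_search(ref, cur_block, top=8, left=8, radius=4)
```

The data-reuse scheme the abstract mentions matters because adjacent candidates in this search overlap heavily: consecutive `(dy, dx)` positions share all but one row or column of `ref`, which a hardware architecture can exploit to cut memory bandwidth.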

Relevance:

30.00%

Publisher:

Abstract:

On the modern continental shelf north of Rio Grande do Norte state (NE Brazil) lies a paleo-valley, submerged during the last glacial sea-level lowstand, that marks the continuation of the most important river of the area, the Açu River. Despite the intense exploration activity of the oil industry, there is little information about the shallow stratigraphy. To fill this gap, focusing on the Neogene, this work comprised a marine seismic investigation, the development of a processing flow for high-resolution seismic data, and the recognition of the main morphological feature of the study area: the incised valley of the Açu River. The acquisition of shallow seismic data was undertaken together with the Laboratory of Marine Geology/Geophysics and Environmental Monitoring (GGEMMA) of the Federal University of Rio Grande do Norte (UFRN), within the SISPLAT project, in which the geomorphological structure of the Açu paleo-valley was the target of the survey. Geophysical data were acquired along longitudinal and transverse sections and subsequently submitted to a processing flow, hitherto little used and/or little addressed in the literature, which yielded results of much higher quality than the raw data. The proposed flow was developed and applied to the X-Star (acoustic sensor) data using the resources of the ReflexW 4.5 program. A surface fluvial architecture was constructed from bathymetric data and remote-sensing imagery, fused and draped over digital elevation models to create three-dimensional (3D) perspective views, which were used to analyze the 3D geometry of the geological features and to provide morphologically defined mapping.
The results are expressed in the analysis of seismic sections that extend over the continental shelf and upper slope, from the mouth of the Açu River to the shelf edge, allowing the identification and quantification of geometric features such as depth, thickness, horizons, and the seismic-stratigraphic units of the area. Emphasis was placed on the palaeoenvironmental interpretation of the bounding unconformity and the sedimentary fill of the incised valley, controlled by structural elements and marked by the influence of sea-level changes. The interpretation of the evolution of this river can provide information for more precise descriptions and interpretations, describing the palaeoenvironmental controls that influenced incised-valley evolution and preservation, and offering a better comprehensive understanding of this reservoir-analogue system.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To evaluate eyebrow position at different ages using angular measurements. METHODS: Subjects aged 4 to 6 years (children's group) and 50 years or older (elderly group) were evaluated, separated into age ranges; eyebrow position was assessed on digital images using angular measurements. The images were taken in the primary position of gaze with a Sony Lithium camcorder, then transferred to a Macintosh G4 computer and processed with the NIH 1.58 program. The parameters analyzed were the inner, outer, and vertical angles of the eyebrow tail. Comparisons were made between sexes, age ranges, and sides. The results were submitted to statistical analysis. RESULTS: Comparison of the angular measurements showed a significant difference in the position of the eyebrow tail between the groups studied when compared within similar age ranges. Moreover, comparing children and adults, there were differences in all the angle types studied. CONCLUSIONS: Eyebrow position assessed by angular measurements differed between children and the elderly, revealing a positive association with age.
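A generic helper of the kind used for such angular measurements on digital images — the angle at one landmark formed by rays toward two other points — might look like this (the study's own landmark definitions are not reproduced; this is only a sketch):

```python
import math

def angle_at(vertex, p1, p2):
    """Angle in degrees at `vertex` formed by the rays toward p1 and p2.

    Points are (x, y) pixel coordinates picked on the image; which
    three landmarks to use is defined by the measurement protocol.
    """
    v1 = (p1[0] - vertex[0], p1[1] - vertex[1])
    v2 = (p2[0] - vertex[0], p2[1] - vertex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    return math.degrees(math.acos(dot / (n1 * n2)))

# Sanity check: perpendicular rays give a right angle
a = angle_at((0, 0), (1, 0), (0, 1))
```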

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To quantify, using a digital imaging system, eyelid measurements before and after upper blepharoplasty surgery. METHODS: Eighteen eyelids of 9 female patients with dermatochalasis, aged 40 to 75 years, seen at the HC of FMB - UNESP, were evaluated. Photographs of the patients were obtained before and 60 days after upper eyelid blepharoplasty. The images were transferred to a computer and analyzed with the Scion Image Frame Grabber program. The parameters evaluated were: palpebral fissure height in the primary position of gaze, upper eyelid crease height, and the lateral palpebral angle, before and 60 days after upper blepharoplasty. RESULTS: After surgery, palpebral fissure height and upper eyelid crease height increased. However, the lateral palpebral angle did not change. CONCLUSION: Eyelid position changes after blepharoplasty, and digital image processing makes it possible to quantify these changes, measuring the results obtained with surgery.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVES: To evaluate eyelid position in patients with anophthalmic cavities, with and without an external ocular prosthesis, using digital image processing. METHODS: Eighteen patients were evaluated qualitatively and quantitatively at the Botucatu Medical School - Universidade Estadual Paulista - UNESP, with and without the external prosthesis. Using images captured with a camcorder and processed with the Scion Image program, the upper eyelid crease height, the palpebral fissure height, and the medial and lateral canthal angles were measured. RESULTS: Pseudo-strabismus and a deep upper eyelid crease were the most frequent findings on external examination. There was a significant difference in all the variables studied, with a decrease in upper eyelid crease height, an increase in palpebral fissure area, and an increase in the medial and lateral canthal angles when the patient was wearing the external prosthesis. CONCLUSION: All the patients evaluated presented some type of orbito-palpebral abnormality, reflecting the difficulty of giving the wearer of an anophthalmic cavity an appearance identical to that of the normal orbit. Digital image processing allowed an objective evaluation of the oculo-palpebral dimensions, which may contribute to sequential evaluations of patients with anophthalmic cavities.

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To compare tooth-size measurements, their reproducibility, and the application of the Tanaka and Johnston regression equation for predicting the size of canines and premolars on plaster and digital models. METHODS: Thirty plaster models were scanned to obtain digital models. Mesiodistal tooth widths were measured with a digital caliper on the plaster models and with the O3d software (Widialabs) on the digital models. The sum of the widths of the lower incisors was used to obtain predicted premolar and canine sizes with the regression equation, and these values were compared with the actual tooth sizes. The data were statistically analyzed using Pearson's correlation test, Dahlberg's formula, the paired t-test, and analysis of variance (p < 0.05). RESULTS: Excellent intra-examiner agreement was observed for the measurements on both models. Random error was not present in the caliper measurements, and systematic error was more frequent on the digital models. The space prediction obtained with the regression equation was larger than the sum of the premolars and canines present on both the plaster and the digital models. CONCLUSION: Despite the good reproducibility of the measurements on both models, most measurements on the digital models were larger than those on the plaster models. The predicted space was overestimated on both models, and was significantly larger on the digital models.
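The commonly cited form of the Tanaka and Johnston regression adds a constant to half the summed mesiodistal widths of the four lower incisors; a minimal sketch, assuming the standard 10.5 mm (mandibular) and 11.0 mm (maxillary) constants:

```python
def tanaka_johnston(lower_incisors_sum_mm, arch="mandibular"):
    """Predicted summed mesiodistal width (mm) of the canine and two
    premolars in one quadrant, from the sum of the four lower incisors.

    Commonly cited constants: 10.5 mm for the mandibular arch,
    11.0 mm for the maxillary arch.
    """
    constant = 10.5 if arch == "mandibular" else 11.0
    return lower_incisors_sum_mm / 2.0 + constant

# E.g. lower incisors summing to 22.0 mm
mand = tanaka_johnston(22.0)               # 21.5 mm per mandibular quadrant
max_ = tanaka_johnston(22.0, "maxillary")  # 22.0 mm per maxillary quadrant
```

In the study, predictions of this form were compared against the actual summed widths measured on each model type, which is how the overestimation was quantified.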

Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION: The use of computerized photogrammetry in place of goniometry, or vice versa, in clinical practice still lacks consistent grounding. OBJECTIVES: The objectives of this study were to verify inter- and intra-examiner reliability in quantifying the angular measurements obtained by computerized photogrammetry and by goniometry, and to determine the parallel reliability between these two assessment instruments. MATERIALS AND METHODS: 26 volunteers and 4 examiners took part in the study. Data collection was performed in 4 sequential steps: marking of the anatomical reference points, measurement and recording of the goniometric values, capture of the volunteer's image with the markers attached to the body, and evaluation of the photographic record in the ImageJ program. RESULTS: The goniometer is a reliable instrument in most of the evidence; however, the reliability of the measurements depends mainly on the standardization of procedures. Methodological considerations regarding the establishment of reliability and the standardization of marker placement are necessary in order to offer even more reliable assessment options for clinical practice. CONCLUSION: Both instruments are reliable and acceptable, but more evidence is still needed to support their use, since few researchers have used the same study design, and comparing results across studies is often difficult.

Relevance:

30.00%

Publisher:

Abstract:

Morphometric analysis of tissue melanin can quantitatively support research on dyschromias. The authors demonstrate three digital image analysis techniques that allow the identification of the pixels corresponding to melanin in Fontana-Masson-stained epidermis, making it possible to calculate its percentage in the different layers of the epidermis, and discuss the main elements involved in the analysis and the need for rigorous standardization of the process.
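The pixel-percentage step can be sketched as a simple threshold count over a grayscale image, since melanin granules appear dark in Fontana-Masson staining; the threshold value below is hypothetical, and choosing it well is exactly where the standardization the authors call for matters:

```python
import numpy as np

def melanin_fraction(gray, threshold):
    """Percentage of pixels at or below `threshold` in a grayscale
    image of a stained section (dark pixels taken as melanin).

    In practice this would be applied per epidermal layer, with the
    threshold fixed by a standardized calibration protocol.
    """
    mask = gray <= threshold
    return 100.0 * mask.sum() / gray.size

# Toy 2x2 "image": two dark pixels out of four
img = np.array([[10, 200], [30, 180]], dtype=np.uint8)
pct = melanin_fraction(img, threshold=50)
```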

Relevance:

30.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)