963 results for Images Digital Processing


Relevance: 30.00%

Abstract:

Dissertation submitted in fulfilment of the requirements for the degree of Doctor in Environment, Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia

Relevance: 30.00%

Abstract:

This dissertation arose from the need to optimise the technical and, above all, the human resources assigned to the verification of measuring instruments under Legal Metrological Control. These verifications, carried out under the competences formerly assigned to the Direção de Serviços da Qualidade of the then Direção Regional da Economia do Norte, were performed by the Divisão da Qualidade e Licenciamento, at the time headed by the author of this thesis, namely as regards the laboratory testing of analogue pressure gauges. The main objective of the work was achieved through the development of an automated system, materialised in the construction of a prototype which, applied to the multiple pressure comparator previously in use, would read the indication of each analogue pressure gauge by means of image processing techniques, a task traditionally performed manually by a specialised operator. The command, control and measurement methodologies of this automated system were implemented as an algorithm in National Instruments LabVIEW® software, particularly as regards the processing of the images acquired by a USB video camera. The interface with the hardware was accomplished using a USB-6212 Multifunction Data Acquisition (DAQ) module from the same manufacturer. For the horizontal and vertical positioning of the USB video camera, linear guides driven by stepper motors were used; these devices were also employed to actuate the pressure comparator. Finally, the reading of the reference standard was acquired digitally, through its virtualisation, together with an application developed in this project, named appMAN, intended for the overall management of the automated system, namely the calculation of the measurement result, its error and associated uncertainty, and the issuing of the corresponding supporting documents.
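
The gauge-reading step described above was implemented in LabVIEW, which is graphical and not reproduced here; as a rough illustration of the general technique, a minimal Python/OpenCV sketch of estimating a needle angle with a Hough line transform might look as follows. The dial geometry and the angle-to-pressure calibration constants are hypothetical assumptions, not values from the dissertation.

    # Minimal sketch: estimate an analogue pressure gauge reading from a
    # camera frame. Dial centre, radius and the angle-to-pressure calibration
    # must be known in advance; every constant below is hypothetical.
    import cv2
    import numpy as np

    ANGLE_MIN, ANGLE_MAX = -45.0, 225.0   # needle angle at the scale ends (deg)
    P_MIN, P_MAX = 0.0, 10.0              # corresponding pressures (bar)

    def read_gauge(frame, centre, radius):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                                minLineLength=int(0.5 * radius), maxLineGap=5)
        if lines is None:
            return None
        cx, cy = centre

        def dist_to_centre(line):
            x1, y1, x2, y2 = line[0]
            # Perpendicular distance from the dial centre to the segment's
            # supporting line; the needle is the segment passing closest.
            return (abs((y2 - y1) * cx - (x2 - x1) * cy + x2 * y1 - y2 * x1)
                    / np.hypot(x2 - x1, y2 - y1))

        x1, y1, x2, y2 = min(lines, key=dist_to_centre)[0]
        # The endpoint farther from the centre is taken as the needle tip.
        tip = max([(x1, y1), (x2, y2)],
                  key=lambda p: np.hypot(p[0] - cx, p[1] - cy))
        angle = np.degrees(np.arctan2(cy - tip[1], tip[0] - cx))
        # Linear interpolation from needle angle to indicated pressure.
        frac = (angle - ANGLE_MIN) / (ANGLE_MAX - ANGLE_MIN)
        return P_MIN + frac * (P_MAX - P_MIN)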

Relevance: 30.00%

Abstract:

Dissertation submitted for the degree of Master in Biomedical Engineering

Relevance: 30.00%

Abstract:

In the context of focal epilepsy, the simultaneous combination of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) holds great promise as a technique by which the hemodynamic correlates of interictal spikes detected on scalp EEG can be identified. Because traditional EEG recordings have not overcome the difficulty of correlating ictal clinical symptoms with an onset in particular lobar areas, the epileptogenic cortical regions need to be mapped more precisely. fMRI, on the other hand, has suggested localizations more consistent with the observed ictal clinical manifestations. This study was developed to improve our understanding of how the parameters involved in processing the physical and mathematical data produced by the EEG/fMRI technique influence the final results. Accuracy was evaluated by comparing the BOLD results with the high-resolution EEG maps, with the malformative lesions detected in the T1-weighted MR images, and with the anatomical localizations of each patient's diagnosed symptomatology. Optimizing the set of parameters used will provide an important contribution to the diagnosis of epileptogenic foci in patients enrolled in an epilepsy surgery evaluation program. The results obtained allowed us to conclude that, by associating the BOLD effect with interictal spikes, the epileptogenic areas are mapped to localizations different from those obtained by the EEG maps representing the electrical potential distribution across the scalp, and that there is a solid relationship between the variation of particular parameters manipulated during fMRI data processing and the optimization of the final results, among which smoothing, the number of deleted volumes, the HRF used to convolve with the activation design, and the shape of the Gamma function stand out.
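
As a generic illustration of the HRF convolution step mentioned above (not the authors' processing pipeline), a BOLD regressor can be built by convolving the interictal spike train with a Gamma-shaped HRF; all parameter values in this Python sketch are assumptions.

    # Sketch: build a BOLD regressor by convolving an interictal spike train
    # with a single-Gamma HRF. Parameter values are illustrative assumptions,
    # not the values studied in this work.
    import numpy as np
    from scipy.stats import gamma

    TR = 2.0                           # repetition time (s)
    n_scans = 300
    t = np.arange(0, 30, TR)           # HRF support: 0 to 30 s

    hrf = gamma.pdf(t, a=6.0, scale=1.0)   # Gamma-shaped HRF (assumed shape)
    hrf /= hrf.sum()                       # normalise to unit area

    spikes = np.zeros(n_scans)             # 1 at scans containing a spike
    spikes[[40, 95, 180, 240]] = 1.0       # hypothetical spike positions

    # Convolve and truncate to the scan length; this regressor then enters
    # the fMRI model used to localise spike-related BOLD changes.
    regressor = np.convolve(spikes, hrf)[:n_scans]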

Relevance: 30.00%

Abstract:

Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical and Computer Engineering at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa

Relevance: 30.00%

Abstract:

Dissertation submitted for the degree of Master in Mechanical Engineering

Relevance: 30.00%

Abstract:

Dissertation submitted for the degree of Master in Informatics Engineering

Relevance: 30.00%

Abstract:

Breast cancer is the most common cancer among women and a major public health problem. Worldwide, X-ray mammography is the current gold standard for medical imaging of breast cancer. However, it has well-known limitations: false-negative rates of up to 66% in symptomatic women and false-positive rates of up to 60% are a continued source of concern and debate. These drawbacks have prompted the development of other imaging techniques for breast cancer detection, among them Digital Breast Tomosynthesis (DBT). DBT is a 3D radiographic technique that reduces the obscuring effect of tissue overlap and appears to address both the false-negative and false-positive rates. The 3D images in DBT can only be obtained through image reconstruction methods. These methods play an important role in a clinical setting, since the reconstruction process must be both accurate and fast. This dissertation deals with the optimization of iterative algorithms through parallel computing on Graphics Processing Units (GPUs), using the Compute Unified Device Architecture (CUDA) to make the 3D reconstruction faster. Iterative algorithms have been shown to produce the highest-quality DBT images and have the potential to reduce patient dose in DBT scans, but because they are computationally intensive their clinical use has so far been impractical. A method of integrating CUDA into Interactive Data Language (IDL) is proposed in order to accelerate the DBT image reconstructions; this method has never before been attempted for DBT. In this work the system matrix calculation, the most computationally expensive part of iterative algorithms, is accelerated. A speedup of 1.6 is achieved, demonstrating that GPUs can accelerate the IDL implementation.
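
For readers unfamiliar with the iterative reconstruction schemes discussed above, the following toy Python sketch shows one ML-EM-style update built around an explicit system matrix; it is a generic illustration at toy scale, not the dissertation's IDL/CUDA implementation, and the matrix sizes are arbitrary.

    # Toy sketch of one ML-EM-style iterative reconstruction update.
    # A is the system matrix mapping voxel values to detector measurements;
    # in real DBT it is enormous and sparse, which is why computing it
    # dominates the cost and is the part worth offloading to a GPU.
    import numpy as np

    def mlem_update(x, A, y, eps=1e-12):
        """One ML-EM iteration: x (voxels), A (m-by-n system matrix), y (data)."""
        forward = A @ x                                # forward projection
        ratio = y / np.maximum(forward, eps)           # compare with measurements
        back = A.T @ ratio                             # back-project the ratio
        sens = np.maximum(A.T @ np.ones_like(y), eps)  # sensitivity normalisation
        return x * back / sens

    rng = np.random.default_rng(0)
    A = rng.random((64, 32))        # hypothetical toy-sized system matrix
    x_true = rng.random(32)
    y = A @ x_true                  # simulated noiseless measurements

    x = np.ones(32)                 # flat initial estimate
    for _ in range(50):
        x = mlem_update(x, A, y)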

Relevance: 30.00%

Abstract:

This work presents the archaeometallurgical study of a group of metallic artefacts found at the Moinhos de Golas site, Vila Real (northern Portugal), which can generically be attributed to Proto-history (1st millennium BC, Late Bronze Age and Iron Age). The collection comprises 35 objects: weapons, ornaments and tools, plus others of difficult classification, such as rings, bars and one small, thin, bent sheet. Some of the objects can typologically be attributed to the Late Bronze Age; others are harder to attribute specifically. The archaeometallurgical study involved digital X-ray radiography, elemental analysis by micro-energy-dispersive X-ray fluorescence spectrometry and scanning electron microscopy with energy-dispersive spectroscopy, and microstructural observation by optical microscopy and scanning electron microscopy. The radiographic images revealed structural heterogeneities, frequently related to the degradation of some artefacts, and the elemental analysis showed that the majority of the artefacts were produced in a binary bronze alloy (Cu-Sn) (73%), with others produced in copper (15%) and three artefacts in brass (Cu-Zn(-Sn-Pb)). Within each type of alloy there is a certain variability in composition and in the type of inclusions. The microstructural observations revealed that the majority of the artefacts underwent cycles of thermo-mechanical processing after casting. The diversity of metals/alloys identified is of great interest, specifically the presence of brasses, which can be interpreted as imports related to the circulation of exogenous products during Proto-history and/or as depositions made at different moments at the site, from the Late Bronze Age/Early Iron Age transition (Orientalizing period) onwards, including the Roman period.

Relevance: 30.00%

Abstract:

This thesis is one of the first reports of digital microfluidics on paper and the first in which the chip's circuit was screen printed onto the paper. The use of the screen printing technique, a low-cost and fast method for electrode deposition, makes the whole chip fabrication far better aligned with the low-cost choice of paper as a substrate. Functioning chips were developed that were capable of operating at voltages as low as 50 V while performing all the digital microfluidics operations: movement, dispensing, merging and splitting of droplets. Silver ink electrodes were screen printed onto paper substrates and covered with Parylene-C (by vapor deposition) as the dielectric and Teflon AF 1600 (by spin coating) as the hydrophobic layer. The morphology of different paper substrates, silver inks (with different annealing conditions) and Parylene deposition conditions was studied by optical microscopy, AFM, SEM and 3D profilometry. Resolution tests for the printing process and electrical characterization of the silver electrodes were also performed. As a showcase of the potential of these chips as biosensing devices, a colorimetric peroxidase detection test was successfully run on chip, using 200 nL to 350 nL droplets dispensed from 1 μL drops.

Relevance: 30.00%

Abstract:

This dissertation analyzes the possibilities of using speech-processing technologies to transform the user experience of ActivoBank's customers when using remote banking solutions. The technologies are examined against different criteria to determine whether they support the bank's goals and strategy and whether they should be incorporated into the bank's offering. These criteria include alignment with ActivoBank's values, the suitability of the technology providers, the benefits these technologies entail, potential risks, appeal to customers and impact on customer satisfaction. The analysis suggests that ActivoBank may not be in a position to adopt these technologies at this point in time.

Relevance: 30.00%

Abstract:

Polymer-based scintillator composites have been produced by combining polystyrene (PS) with Gd2O3:Eu3+ scintillator nanoparticles. Polystyrene was used because it is a flexible and stable binder matrix, resistant to thermal and light deterioration and with suitable optical properties. Gd2O3:Eu3+ was selected as the scintillator material due to its wide band gap, high density and visible light yield. The optical, thermal and electrical characteristics of the composites were studied as a function of filler content, together with their performance as scintillator materials. Additionally, 1 wt.% of 2,5-diphenyloxazole (PPO) and 0.01 wt.% of 1,4-bis(2-(5-phenyloxazolyl))benzene (POPOP) were introduced into the polymer matrix in order to strongly improve the light yield, i.e. the measured intensity of the output visible radiation under X-ray irradiation. Whereas increasing the scintillator filler concentration (from 0.25 wt.% to 7.5 wt.%) increases the light yield, it decreases the optical transparency of the composite. The addition of PPO and POPOP strongly increased the overall transduction performance of the composite due to specific absorption and re-emission processes. It is thus shown that Gd2O3:Eu3+/PPO/POPOP/PS composites with 0.25 wt.% scintillator content and fluorescence molecules are suitable for the development of innovative large-area X-ray radiation detectors, for which there is strong industrial demand.

Relevance: 30.00%

Abstract:

Land cover changes over time as a result of human activity, and deforestation is nowadays considered one of the main environmental problems. The objective of this study was to identify and characterize changes in forest cover in Venezuela between 2005 and 2010. Two maps of deforestation hot spots were generated on the basis of MODIS data, one using digital techniques and the other by direct visual interpretation by experts. These maps were validated against Landsat ETM+ images. The accuracy of the digitally derived map was estimated by means of a confusion matrix, yielding an overall accuracy of 92.5%. Expert opinions regarding the hot spots allowed the causes of deforestation to be identified. The main deforestation processes were concentrated north of the Orinoco River, where 8.63% of the country's forests are located. In this region, some places registered an average annual forest change rate of between 0.72% and 2.95%, above the rate for the country as a whole (0.61%). The main causes of deforestation in the period evaluated were agricultural and livestock activities (47.9%), particularly family subsistence farming and extensive farming, which were carried out in 94% of the identified areas.
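
As a small illustration of the accuracy assessment described above, overall accuracy is the trace of the confusion matrix divided by the total count; the Python sketch below uses made-up counts, not the study's data.

    # Sketch: overall accuracy from a confusion matrix
    # (rows = reference classes, columns = mapped classes).
    # The counts are hypothetical, not the study's data.
    import numpy as np

    confusion = np.array([
        [850,  30],   # reference: forest
        [ 45, 620],   # reference: non-forest
    ])
    overall_accuracy = np.trace(confusion) / confusion.sum()
    print(f"Overall accuracy: {overall_accuracy:.1%}")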

Relevance: 30.00%

Abstract:

"Series Title: IFIP - The International Federation for Information Processing, ISSN 1868-4238"

Relevance: 30.00%

Abstract:

As digital image processing techniques become increasingly used in a broad range of consumer applications, the need to evaluate algorithm performance has become recognised by developers as an area of vital importance. With digital image processing algorithms now playing a greater role in security and protection applications, it is crucial that their performance can be studied empirically. Apart from the field of biometrics, little emphasis has been placed on algorithm performance evaluation until now, and where evaluation has taken place it has been carried out in a somewhat cumbersome and unsystematic fashion, without any standardised approach. This paper presents a comprehensive testing methodology and framework aimed at automating the evaluation of image processing algorithms. Ultimately, the test framework aims to shorten the algorithm development life cycle by helping to identify algorithm performance problems quickly and more efficiently.
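
A minimal Python sketch of the kind of automated evaluation loop such a framework implies is shown below; it is a generic outline under assumed interfaces (all names are hypothetical placeholders), not the framework described in the paper.

    # Generic sketch of an automated image-algorithm evaluation harness:
    # run each candidate algorithm over a test set and score its output
    # against ground truth. All names are hypothetical placeholders.
    from dataclasses import dataclass
    from typing import Callable, Dict, List

    import numpy as np

    @dataclass
    class TestCase:
        image: np.ndarray
        ground_truth: np.ndarray

    def mean_abs_error(output: np.ndarray, truth: np.ndarray) -> float:
        return float(np.mean(np.abs(output.astype(float) - truth.astype(float))))

    def evaluate(algorithms: Dict[str, Callable[[np.ndarray], np.ndarray]],
                 cases: List[TestCase]) -> Dict[str, float]:
        """Return one aggregate score per algorithm over the whole test set."""
        return {
            name: float(np.mean([mean_abs_error(algo(c.image), c.ground_truth)
                                 for c in cases]))
            for name, algo in algorithms.items()
        }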