15 results for post-processing method

in Repositório Científico do Instituto Politécnico de Lisboa - Portugal


Relevância:

100.00%

Publicador:

Resumo:

Coronary artery disease (CAD) is currently one of the most prevalent diseases in the world population, and calcium deposits in the coronary arteries are a direct risk factor. These can be assessed by the calcium score (CS) application, available via a computed tomography (CT) scan, which gives an accurate indication of the development of the disease. However, the ionising radiation applied to patients is high. This study aimed to optimise the acquisition protocol in order to reduce the radiation dose and to explain the flow of procedures used to quantify CAD. The main differences in the clinical results when automated or semi-automated post-processing is used will be shown, and the epidemiology, imaging, risk factors and prognosis of the disease described. The software steps and the values that allow the risk of developing CAD to be predicted will be presented. A 64-row multidetector CT scanner with dual source and two phantoms (pig hearts) were used to demonstrate the advantages and disadvantages of the Agatston method. The tube energy was balanced. Two measurements were obtained in each of the three experimental protocols (64, 128 and 256 mAs). Considerable changes appeared in the CS values with the protocol variation. The predefined standard protocol provided the lowest radiation dose (0.43 mGy). This study found that the variation in the radiation dose between protocols, taking into consideration the dose control systems attached to the CT equipment and image quality, was not sufficient to justify changing the default protocol provided by the manufacturer.
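The Agatston method mentioned above assigns each calcified lesion a score equal to its area multiplied by a density weight derived from the lesion's peak attenuation. A minimal sketch of this scoring on a single axial slice follows; the 130 HU threshold and the 1/2/3/4 weight bins are the standard Agatston definitions, while the BFS lesion labelling and the 1 mm² minimum area are simplifying assumptions of this sketch, not details taken from the study:

```python
import numpy as np
from collections import deque

def agatston_weight(max_hu):
    """Standard Agatston density weight from the lesion's peak HU."""
    if max_hu >= 400:
        return 4
    if max_hu >= 300:
        return 3
    if max_hu >= 200:
        return 2
    if max_hu >= 130:
        return 1
    return 0

def _connected_components(mask):
    """4-connected component labelling via BFS (small helper, no SciPy needed)."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        queue = deque([start])
        labels[start] = current
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    queue.append((nr, nc))
    return labels, current

def agatston_score(slice_hu, pixel_area_mm2, threshold=130, min_area_mm2=1.0):
    """Score one axial slice: sum over lesions of area * density weight."""
    labels, n = _connected_components(slice_hu >= threshold)
    score = 0.0
    for lesion in range(1, n + 1):
        region = labels == lesion
        area = float(region.sum()) * pixel_area_mm2
        if area < min_area_mm2:  # ignore sub-millimetre specks (noise)
            continue
        score += area * agatston_weight(slice_hu[region].max())
    return score
```

A clinical implementation would sum this per-slice score over all slices covering the coronary arteries.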

Relevância:

90.00%

Publicador:

Resumo:

Discrete data representations are necessary, or at least convenient, in many machine learning problems. While feature selection (FS) techniques aim at finding relevant subsets of features, the goal of feature discretization (FD) is to find concise (quantized) data representations adequate for the learning task at hand. In this paper, we propose two incremental methods for FD. The first method belongs to the filter family, in which the quality of the discretization is assessed by a (supervised or unsupervised) relevance criterion. The second method is a wrapper, in which the discretized features are assessed using a classifier. Both methods can be coupled with any static (unsupervised or supervised) discretization procedure and can be used to perform FS as a pre-processing or post-processing stage. The proposed methods attain efficient representations suitable for binary and multi-class problems with different types of data, and are competitive with existing methods. Moreover, using well-known FS methods with the features discretized by our techniques leads to better accuracy than with the features discretized by other methods or with the original features.
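The filter variant described above can be sketched as follows: a static discretizer produces candidate quantizations with an increasing number of bins, and a relevance criterion decides when to stop refining. Here the static discretizer is equal-width binning and the criterion is mutual information with the class labels, a common supervised choice; the paper's exact criterion and stopping rule may differ:

```python
import numpy as np

def equal_width_bins(x, n_bins):
    """Static unsupervised discretizer: equal-width quantization into n_bins."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)[1:-1]
    return np.digitize(x, edges)

def mutual_information(d, y):
    """Relevance criterion: empirical MI (nats) between a discretized
    feature d and integer class labels y."""
    joint = np.zeros((d.max() + 1, y.max() + 1))
    for di, yi in zip(d, y):
        joint[di, yi] += 1
    joint /= joint.sum()
    px, py = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def incremental_discretize(x, y, max_bins=8, tol=1e-3):
    """Filter-style incremental FD: grow the number of bins while the
    relevance criterion keeps improving by more than tol."""
    best_d, best_score = equal_width_bins(x, 2), -np.inf
    for n in range(2, max_bins + 1):
        d = equal_width_bins(x, n)
        score = mutual_information(d, y)
        if score <= best_score + tol:
            break
        best_d, best_score = d, score
    return best_d, best_score
```

The wrapper variant would replace `mutual_information` with the cross-validated accuracy of a classifier trained on the discretized feature.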

Relevância:

80.00%

Publicador:

Resumo:

The surface morphology, structure and composition of human dentin treated with a femtosecond infrared laser (pulse duration 500 fs, wavelength 1030 nm, fluences ranging from 1 to 3 J cm⁻²) were studied by scanning electron microscopy, x-ray diffraction, x-ray photoelectron spectroscopy and Fourier transform infrared spectroscopy. The average dentin ablation threshold under these conditions was 0.6 ± 0.2 J cm⁻², and the ablation rate was in the range of 1 to 2 µm/pulse for an average fluence of 3 J cm⁻². The ablated surfaces present an irregular and rugged appearance, with no significant traces of melting, deformation, cracking or carbonization. The smear layer was entirely removed by the laser treatment. For fluences only slightly higher than the ablation threshold, the morphology of the laser-treated surfaces was very similar to dentin fracture surfaces and the dentinal tubules remained open. For higher fluences, the surface was more porous and the dentin structure was partially concealed by ablation debris and a few resolidified droplets. Independently of the laser processing parameters and processing method used, no sub-superficial cracking was observed. The dentin constitution and chemical composition were not significantly modified by the laser treatment in the processing parameter range used. In particular, the organic matter was not preferentially removed from the surface and no traces of high-temperature phosphates, such as beta-tricalcium phosphate, were observed. The results are compatible with an electrostatic ablation mechanism. In conclusion, the high beam quality and short pulse duration of the ultrafast laser used should allow the accurate preparation of cavities, with negligible damage to the underlying material.

Relevância:

80.00%

Publicador:

Resumo:

Introduction – Mammography is the main diagnostic imaging method used for breast cancer screening and diagnosis, and is the imaging modality recommended for screening programmes in several European countries and in the United States. The implementation of digital technology changed mammography practice, in particular creating the need to adapt quality control programmes. Aims – To characterise the mammography technology installed in Portugal and the practices adopted in its use by the health professionals involved; to assess the level of harmonisation of mammography practice in Portugal and its compliance with international recommendations; and to identify optimisation opportunities that ensure the effective and safe use of the technology. Methodology – Data on the installed technology were collected from governmental sources, mammography service providers and industry. Three questionnaires were designed, targeted at the profiles of the radiologist, the radiographer working in digital mammography and the chief radiographer. The questionnaires were applied in 65 mammography providers selected on the basis of geographical location, type of installed technology and institutional profile. Results – 441 mammography systems were identified in Portugal. The most frequent technology (62%), commonly known as computed radiography (CR), consists of a photostimulable image-plate detector inserted in a support cassette plus an optical processing system; most of these systems (78%) are installed in private providers. Approximately 12% of the installed equipment are direct digital radiography (DDR) systems. The criteria for selecting technical exposure parameters vary, with 65% of the institutions adopting the equipment manufacturers' recommendations. The post-processing tools most used by radiologists are contrast/brightness adjustment and full and/or localised image magnification. Fifteen institutions (out of 19) have implemented a quality control programme. Conclusions – Portugal has a heterogeneous installed base that includes both obsolete and state-of-the-art equipment. International recommendations/guidelines (European or American) are not formally adopted by most institutions as the basis for mammography practice; the equipment manufacturers' recommendations dominate. Radiographers and radiologists identified needs for specialised training, namely in breast intervention, dose optimisation and quality control. Most respondents agree with the need to certify mammography practice in Portugal and would participate in a voluntary programme.

Relevância:

80.00%

Publicador:

Resumo:

Master's degree in Radiation Applied to Health Technologies. Specialisation: Magnetic Resonance.

Relevância:

80.00%

Publicador:

Resumo:

Frame rate upconversion (FRUC) is an important post-processing technique to enhance the visual quality of low frame rate video. A major recent advance in this area is FRUC based on trilateral filtering, whose novelty mainly derives from the combination of an edge-based motion estimation block matching criterion with the trilateral filter. However, there is still room for improvement, notably towards reducing the size of the uncovered regions in the initial estimated frame, i.e., the estimated frame before trilateral filtering. In this context, an improved motion estimation block matching criterion is proposed, in which a combined luminance and edge error metric is weighted according to the motion vector components, notably to regularise the motion field. Experimental results confirm that significant improvements are achieved for the final interpolated frames, reaching PSNR gains of up to 2.73 dB, on average, over recent alternative solutions, for video content with varied motion characteristics.
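An illustrative sketch of such a combined block matching criterion: the cost adds a luminance SAD, an edge-map SAD, and a motion-vector magnitude penalty that regularises the motion field. The gradient-magnitude edge map, the weights `lam_edge` and `lam_mv`, and the full-search strategy are assumptions of this sketch, not the paper's exact formulation:

```python
import numpy as np

def edge_map(img):
    """Gradient-magnitude edge features (a simple stand-in for the
    paper's edge-based criterion)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def block_cost(prev, curr, top, left, bsize, mv, lam_edge=1.0, lam_mv=0.5):
    """Combined luminance + edge SAD, plus an MV-magnitude penalty.
    Assumes the displaced block stays inside the frame."""
    dy, dx = mv
    cur_blk = curr[top:top + bsize, left:left + bsize].astype(float)
    ref_blk = prev[top + dy:top + dy + bsize,
                   left + dx:left + dx + bsize].astype(float)
    e_cur = edge_map(curr)[top:top + bsize, left:left + bsize]
    e_ref = edge_map(prev)[top + dy:top + dy + bsize,
                           left + dx:left + dx + bsize]
    sad_luma = np.abs(cur_blk - ref_blk).sum()
    sad_edge = np.abs(e_cur - e_ref).sum()
    return sad_luma + lam_edge * sad_edge + lam_mv * (abs(dy) + abs(dx))

def best_motion_vector(prev, curr, top, left, bsize=8, search=4, **weights):
    """Full search over a +/-search window, minimizing the combined cost."""
    candidates = [(dy, dx) for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1)]
    return min(candidates,
               key=lambda mv: block_cost(prev, curr, top, left, bsize,
                                         mv, **weights))
```

The MV-magnitude term biases ties and near-ties towards small, smooth motion vectors, which is one simple way to regularise the motion field.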

Relevância:

80.00%

Publicador:

Resumo:

Project work for obtaining the degree of Master in Informatics and Computer Engineering.

Relevância:

80.00%

Publicador:

Resumo:

Master's dissertation for obtaining the degree of Master in Electrical Engineering, Automation and Industrial Electronics branch.

Relevância:

30.00%

Publicador:

Resumo:

Dissertation presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the Master's degree in Audiovisual and Multimedia.

Relevância:

30.00%

Publicador:

Resumo:

Naturally Occurring Radioactive Materials (NORM) are materials found naturally in the environment that contain radioactive isotopes which can have negative effects on the health of the workers who handle them. Present in underground work such as mining and tunnel construction in granite zones, these materials are difficult to identify and characterize without appropriate equipment for risk evaluation. The assessment methods are exemplified with a case study on the handling and processing of phosphate rock, in which significant amounts of radioactive isotopes were found and, consequently, elevated radon concentrations in enclosed spaces containing these materials.

Relevância:

30.00%

Publicador:

Resumo:

Hyperspectral imaging has become one of the main topics in remote sensing applications. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GB per flight. This high spectral resolution can be used for object detection and to discriminate between different objects based on their spectral characteristics. One of the main problems involved in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power consuming, which compromises their use in applications under onboard constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Specifically, several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth and relatively low cost of these units, which make them appealing for onboard data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced accesses to memory.
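SISAL itself (a minimum-volume simplex solver) is beyond a short sketch, but the pixel-wise parallel structure the GPU implementation exploits is easy to illustrate under the linear mixture model: each pixel's unmixing is an independent small problem. The sketch below uses plain unconstrained least squares over all pixels at once (vectorized NumPy as a CPU stand-in for one-thread-per-pixel CUDA kernels); it is not SISAL:

```python
import numpy as np

def unmix_least_squares(Y, M):
    """Linear mixture model: each pixel (column of Y) satisfies y ≈ M @ a,
    with M the (bands x endmembers) matrix. The pixel columns are mutually
    independent, which is exactly the per-pixel parallelism a CUDA kernel
    exploits (one thread per pixel, coalesced column accesses).

    Y: (bands, pixels).  Returns (endmembers, pixels) abundance estimates.
    """
    return np.linalg.lstsq(M, Y, rcond=None)[0]
```

SISAL additionally estimates M itself by fitting a minimum-volume simplex around the data cloud; once M is known, per-pixel abundance estimation has this embarrassingly parallel shape.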

Relevância:

30.00%

Publicador:

Resumo:

Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Since the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPU) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral datasets, namely the high correlation existing among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral datasets on two different GPU architectures by NVIDIA, the GeForce GTX 590 and GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times when compared with the processing time of HYCA running on one core of the Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.
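The key idea, that a low number of endmembers reduces the measurements needed, can be illustrated with a toy subspace argument. Random Gaussian measurements and least-squares recovery are assumptions of this sketch; P-HYCA's actual coded-aperture measurements and solver are more elaborate. If every pixel lies in the span of p endmembers, slightly more than p measurements per pixel suffice for exact recovery:

```python
import numpy as np

def compress(Y, Phi):
    """Per-pixel compressive measurements: Z = Phi @ Y, where Phi has far
    fewer rows than there are spectral bands."""
    return Phi @ Y

def reconstruct(Z, Phi, E):
    """Recover pixels assumed to lie in the span of the endmember matrix E
    (bands x p): solve (Phi @ E) a = z per pixel, then map back with E.
    Exact when the model holds and Phi @ E has full column rank."""
    A = np.linalg.lstsq(Phi @ E, Z, rcond=None)[0]
    return E @ A
```

With 50 bands, 3 endmembers and only 8 measurements per pixel, reconstruction is exact up to floating-point error, which is the compression leverage the abstract refers to.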

Relevância:

30.00%

Publicador:

Resumo:

One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at a low level using the Compute Unified Device Architecture. SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The proposed implementation is performed in a pixel-by-pixel fashion, using coalesced accesses to memory and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, thereby achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution over big data sets while maintaining the method's accuracy.

Relevância:

30.00%

Publicador:

Resumo:

The parallel hyperspectral unmixing problem is considered in this paper. A semisupervised approach is developed under the linear mixture model, in which the physical constraints on the abundances are taken into account. The proposed approach relies on the increasing availability of spectral libraries of materials measured on the ground, instead of resorting to endmember extraction methods. Since libraries are potentially very large and hyperspectral datasets are of high dimensionality, a pixel-by-pixel parallel implementation is derived that properly exploits the graphics processing unit (GPU) architecture at a low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for real hyperspectral datasets reveal significant speedup factors, up to 164 times, with respect to an optimized serial implementation.
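The per-pixel constrained problem can be sketched as nonnegative least squares against the library. A projected-gradient solver is used here for self-containment; the paper's solver and its handling of the abundance constraints may differ:

```python
import numpy as np

def nnls_pg(D, y, iters=10000):
    """Nonnegative least squares by projected gradient descent: minimize
    ||D @ a - y||^2 subject to a >= 0, with step size 1/L, L = ||D||_2^2.
    D is the (bands x signatures) spectral library, y a single pixel."""
    L = np.linalg.norm(D, 2) ** 2
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        a = np.maximum(0.0, a - (D.T @ (D @ a - y)) / L)
    return a

def unmix_with_library(D, Y, iters=10000):
    """Semisupervised unmixing: independent per-pixel NNLS against the
    library -- this per-pixel independence is what the GPU implementation
    parallelizes."""
    return np.stack([nnls_pg(D, y, iters) for y in Y.T], axis=1)
```

When the library is much larger than the true number of materials, the nonnegativity constraint already drives most library signatures to zero abundance, which is the sparsity effect library-based unmixing relies on.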

Relevância:

30.00%

Publicador:

Resumo:

Many hyperspectral imaging applications require a response in real time or near-real time. To meet this requirement, this paper proposes a parallel unmixing method developed for graphics processing units (GPU). This method is based on vertex component analysis (VCA), a geometry-based method that is highly parallelizable. VCA is a very fast and accurate method that extracts endmember signatures from large hyperspectral datasets without the use of any a priori knowledge about the constituent spectra. Experimental results obtained for simulated and real hyperspectral datasets reveal considerable acceleration factors, up to 24 times.
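The geometric idea behind VCA can be sketched as follows (a deliberately simplified version; full VCA adds SNR-dependent dimensionality reduction and a specific projective step): at each iteration, project the data onto a direction orthogonal to the endmembers found so far and keep the most extreme pixel as the next endmember. When pure pixels are present, the extreme of a linear projection over the data's convex hull is attained at a vertex, i.e., a pure pixel:

```python
import numpy as np

def vca_like(Y, p, seed=0):
    """Simplified VCA-style endmember extraction.

    Y: (bands, pixels) data matrix.  p: number of endmembers.
    Returns the (bands, p) extracted signatures and their pixel indices.
    """
    rng = np.random.default_rng(seed)
    bands = Y.shape[0]
    E = np.zeros((bands, 0))
    idx = []
    for _ in range(p):
        # random direction, made orthogonal to the endmembers found so far
        w = rng.standard_normal(bands)
        if E.shape[1]:
            w -= E @ np.linalg.lstsq(E, w, rcond=None)[0]
        # most extreme pixel along that direction = next endmember
        k = int(np.abs(w @ Y).argmax())
        idx.append(k)
        E = np.column_stack([E, Y[:, k]])
    return E, idx
```

Each iteration is a dense matrix-vector product over all pixels, which is why the method maps well onto GPU hardware.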