70 results for International image


Relevance: 20.00%

Publisher:

Abstract:

Introduction: Anxiety is a common problem in primary care and specialty medical settings. Treating an anxious patient takes more time and adds stress for staff. Unrecognised anxiety may lead to exam repetition and image artifacts, and may hinder scan performance. Reducing patient anxiety at the outset is probably the most useful means of minimizing artifactual FDG uptake, in both brown fat and skeletal muscle, as well as patient movement and claustrophobia. The aim of the study was to examine the effects of information giving on the anxiety levels of patients about to undergo a PET/CT scan, and whether the patient experience is enhanced by the creation of a guideline. Methodology: Two hundred and thirty-two patients were given two questionnaires, before and after the procedure, to determine their prior knowledge, concerns, expectations and experiences of the examination. Verbal information was given by one of the technologists after completion of the first questionnaire. Results: Our results show that the main causes of anxiety in patients having a PET/CT are fear of the procedure itself and fear of the results. The patients who suffered greater anxiety were those scanned during the initial stage of a disease. No significant differences were found between pre-procedural and post-procedural anxiety levels. Findings with regard to satisfaction show that the amount of information given before the procedure does not change anxiety levels and therefore does not influence patient satisfaction. Conclusions: The performance of a PET/CT scan is an important and statistically significant generator of anxiety. PET/CT patients are often poorly informed and present with a range of anxieties that may ultimately affect examination quality. The creation of a guideline may reduce the stress of not knowing what will happen and the anxiety it creates, and may increase patient satisfaction with the experience of having a PET/CT scan.

Abstract:

Introduction: The Standard Uptake Value (SUV) is a measurement of the uptake in a tumour, normalised on the basis of a distribution volume, and is used to quantify 18F-Fluorodeoxyglucose (FDG) uptake in tumours such as primary lung tumours. Several sources of error can affect its accuracy. Normalisation can be based on body weight, body surface area (BSA) or lean body mass (LBM). The aim of this study is to compare the influence of three normalisation volumes on the calculation of SUV: body weight (SUVW), BSA (SUVBSA) and LBM (SUVLBM), with and without glucose correction, in patients with a known primary lung tumour. The correlation between SUV and weight, height, blood glucose level, injected activity and time between injection and image acquisition is also evaluated. Methods: The sample included 30 subjects (8 female, 22 male) with primary lung tumour and a clinical indication for 18F-FDG Positron Emission Tomography (PET). Images were acquired on a Siemens Biograph according to the department's protocol. The maximum-pixel SUVW was obtained for each focus of abnormal uptake through a semi-automatic VOI with 3D isocontour quantification (threshold 2.5). The concentration of radioactivity (kBq/ml) was derived from SUVW, and SUVBSA, SUVLBM and the glucose-corrected SUVs were then obtained mathematically. Results: Statistically significant differences were observed between SUVW, SUVBSA and SUVLBM, and between SUVWgluc, SUVBSAgluc and SUVLBMgluc (p < 0.001). Blood glucose level showed significant positive correlations with SUVW (r = 0.371; p = 0.043) and SUVLBM (r = 0.389; p = 0.034). SUVBSA was independent of variations in blood glucose level. Conclusion: The measurement of radiopharmaceutical tumour uptake normalised on the basis of different distribution volumes remains variable. Further investigation of this subject is recommended.
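The normalisations compared above follow standard formulas. The sketch below (Python) shows how the three SUVs and the glucose correction are typically computed; the Du Bois BSA and James LBM formulas are assumptions, widely used but not necessarily the exact variants applied in the study.

```python
def suv(conc_kbq_ml, injected_kbq, norm_mass_g):
    # Generic SUV: tissue concentration divided by the injected activity
    # per gram of the chosen normalisation volume.
    return conc_kbq_ml / (injected_kbq / norm_mass_g)

def bsa_m2(weight_kg, height_cm):
    # Du Bois body surface area formula (assumed variant).
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def lbm_kg(weight_kg, height_cm, male=True):
    # James lean body mass formula, height in cm (assumed variant).
    if male:
        return 1.10 * weight_kg - 128.0 * (weight_kg / height_cm) ** 2
    return 1.07 * weight_kg - 148.0 * (weight_kg / height_cm) ** 2

def suv_glucose_corrected(suv_value, glucose_mg_dl):
    # Common glucose correction: scale the SUV by glucose level / 100 mg/dl.
    return suv_value * glucose_mg_dl / 100.0

# Example: SUVW for a 70 kg patient (weight converted to grams).
suv_w = suv(5.0, 370000.0, 70 * 1000.0)
```

SUVBSA and SUVLBM follow by passing the BSA- or LBM-derived normalisation volume to `suv` in place of body weight.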

Abstract:

Measurements in civil engineering load tests usually require considerable time and complex procedures. Therefore, measurements are usually constrained by the number of sensors, resulting in a restricted monitored area. Image processing analysis is an alternative that enables measurement of the complete area of interest with a simple and effective setup. In this article, photo sequences taken during load displacement tests were captured by a digital camera and processed with image correlation algorithms. Three different image processing algorithms were applied to real images taken from tests using specimens of PVC and Plexiglas. The data obtained from the image processing algorithms were also compared with the data from physical sensors. Complete displacement and strain maps were obtained. Results show that the accuracy of the measurements obtained by photogrammetry is equivalent to that of the physical sensors, but with much less equipment and fewer setup requirements. © 2015 Computer-Aided Civil and Infrastructure Engineering.
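Image correlation of the kind described above tracks a small patch between frames. The following sketch (plain NumPy, not the article's algorithms) recovers an integer-pixel displacement by normalized cross-correlation over a search window:

```python
import numpy as np

def ncc_displacement(ref, cur, y, x, half, search):
    """Track a (2*half+1)^2 patch from `ref` around (y, x) inside a
    search window of `cur` using normalized cross-correlation.
    Returns the (dy, dx) offset with the highest correlation score."""
    tpl = ref[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    tpl = tpl - tpl.mean()
    best, best_dyx = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = cur[y + dy - half:y + dy + half + 1,
                      x + dx - half:x + dx + half + 1].astype(float)
            win = win - win.mean()
            denom = np.sqrt((tpl ** 2).sum() * (win ** 2).sum())
            if denom == 0:
                continue  # flat window, correlation undefined
            score = (tpl * win).sum() / denom
            if score > best:
                best, best_dyx = score, (dy, dx)
    return best_dyx
```

Repeating this for a grid of points yields a displacement map; strains then follow from the spatial gradients of the displacements.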

Abstract:

Conventional film-based X-ray imaging systems are being replaced by their digital equivalents. Different approaches are being followed, considering direct or indirect conversion, with the latter technique dominating. The typical indirect-conversion X-ray panel detector uses a phosphor for X-ray conversion coupled to a large-area array of amorphous silicon based optical sensors and switching thin-film transistors (TFTs). The pixel information can then be read out by switching the corresponding row and column transistors, routing the signal to an external amplifier. In this work we follow an alternative approach, in which the electrical switching performed by the TFTs is replaced by optical scanning using a low-power laser beam and a sensing/switching PINPIN structure, resulting in a simpler device. The optically active device is a PINPIN array, sharing both front and back electrical contacts, deposited over a glass substrate. During X-ray exposure, each sensing-side photodiode collects photons generated by the scintillator screen (560 nm), charging its internal capacitance. Subsequently, a laser beam (445 nm) scans the switching diodes (back side), retrieving the stored charge sequentially and reconstructing the image. In this paper we present recent work on the optoelectronic characterization of the PINPIN structure to be incorporated in the X-ray image sensor. The results of the optoelectronic characterization of the device and its dependence on the scanning beam parameters are presented and discussed. Preliminary results of line scans are also presented. © 2014 Elsevier B.V. All rights reserved.
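The readout scheme described above can be pictured with a toy model (Python; all numbers are illustrative, not device parameters): exposure stores charge per pixel, and the laser scan then retrieves that charge one pixel at a time, rebuilding the image.

```python
import numpy as np

def expose(light, capacitance=1.0):
    # Each sensing photodiode stores a charge proportional to the
    # scintillator light reaching it during the X-ray exposure.
    return capacitance * light

def scan_readout(charge):
    # The scanning laser visits the switching diodes sequentially;
    # the charge released at each visit is one pixel of the image,
    # and the pixel is left discharged afterwards.
    image = np.zeros_like(charge)
    for i in range(charge.shape[0]):
        for j in range(charge.shape[1]):
            image[i, j] = charge[i, j]
            charge[i, j] = 0.0
    return image
```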

Abstract:

Introduction – The information searches that higher-education students perform in electronic resources do not necessarily correspond to mastery of the skills of searching, analysing, evaluating, selecting and making good use of the information retrieved. The concept of information literacy gains relevance and prominence insofar as it encompasses the competencies needed to recognise when information is required and to act efficiently and effectively in obtaining and using it. Objective – The goal of the Escola Superior de Tecnologia da Saúde de Lisboa (ESTeSL) was to train students, teachers and researchers outside ESTeSL in information literacy skills. Methods – The training was integrated into national and international projects, depending on the target audiences, the topics, the contents, the workload and the request of the partner institution. The Calouste Gulbenkian Foundation was the main financial sponsor. Results – Several interventions took place nationally and internationally. In 2010, in Angola, at the Instituto Médio de Saúde do Bengo, 10 librarians were trained in building and managing a health library, with an introduction to information literacy (35h). In 2014, within the ERASMUS Intensive Programme OPTIMAX (Radiation Dose and Image Quality Optimisation in Medical Imaging), 40 radiography teachers and students (from Portugal, the United Kingdom, Norway, the Netherlands and Switzerland) received training on methodology, on searching MEDLINE and the Web of Science, and on Mendeley as a reference manager (4h). The final works of this course were published as an ebook (http://usir.salford.ac.uk/34439/1/Final%20complete%20version.pdf), whose editorial revision was the librarians' responsibility.
Throughout 2014, at the Escola Superior de Educação, Escola Superior de Dança, Instituto Politécnico de Setúbal and Faculdade de Medicina de Lisboa, and throughout 2015 at the Universidade Aberta, Escola Superior de Comunicação Social, Instituto Egas Moniz, Faculdade de Letras de Lisboa and Centro de Linguística da Universidade de Lisboa, contents were designed on the use of ZOTERO and Mendeley for managing bibliographic references and on a new way of doing research. Each of these sessions (2.5h) involved about 25 final-year students, master's students and teachers. In 2015, in Mozambique, at the Instituto Superior de Ciências da Saúde, 5 librarians and 46 students and teachers were trained (70h). The contents taught were: 1) management and organisation of a health library (for librarians); 2) information literacy: searching MEDLINE, SciELO and RCAAP, reference managers and how to avoid plagiarism (for librarians and final-year radiography students). The hours allocated to the students included tutoring of their undergraduate monographs, in collaboration with two other teachers from the project. Training at other national higher-education institutions is scheduled for 2016. Similar training is also envisaged in Timor-Leste, whose contents, dates and workload are yet to be scheduled. Conclusions – These initiatives benefit the institution (through visibility), the librarians (through the demonstration of competencies) and the students, teachers and researchers (through the new skills gained and the autonomy acquired). ESTeSL's information literacy project has contributed effectively to the construction and production of knowledge in the academic environment, both nationally and internationally, with the library as the privileged partner in this culture of collaboration.

Abstract:

This paper proposes an FPGA-based architecture for onboard hyperspectral unmixing. The method, based on Vertex Component Analysis (VCA), has several advantages: it is unsupervised, fully automatic, and works without a dimensionality reduction (DR) pre-processing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 SoC FPGA, based on the Artix-7 FPGA programmable logic, and tested using real hyperspectral datasets. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform for implementing high-performance, low-cost embedded systems.
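The core VCA iteration can be sketched in a few lines of NumPy (a minimal sketch of the selection idea, not the paper's FPGA implementation): at each step, project all pixels onto a direction orthogonal to the subspace spanned by the endmembers found so far, and keep the most extreme pixel.

```python
import numpy as np

def vca(Y, p, seed=0):
    """Minimal VCA sketch. Y is a bands x pixels matrix; returns the
    indices of p pixels selected as endmember candidates."""
    rng = np.random.default_rng(seed)
    E = np.zeros((Y.shape[0], p))   # endmember signatures found so far
    idx = []
    for k in range(p):
        w = rng.standard_normal(Y.shape[0])
        if k == 0:
            f = w
        else:
            # Direction orthogonal to the span of the current endmembers.
            Ek = E[:, :k]
            f = w - Ek @ np.linalg.pinv(Ek) @ w
        f = f / np.linalg.norm(f)
        v = f @ Y                    # projection of every pixel onto f
        j = int(np.argmax(np.abs(v)))  # most extreme pixel is a vertex
        idx.append(j)
        E[:, k] = Y[:, j]
    return idx
```

Because the extremes of a linear functional over a simplex occur at its vertices, the selected pixels correspond to the purest spectra in the scene.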

Abstract:

Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GB per flight. This high spectral resolution can be used for object detection and to discriminate between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks in hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive, and even highly power-consuming, which compromises their use in applications under onboard constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth and relatively low cost of these units, which make them appealing for onboard data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene; i.e., it is able to unmix hyperspectral data sets in which the pure-pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.
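The mixed-pixel problem described above can be made concrete with a toy example (Python; the material names and spectra are invented for illustration): a sensor footprint covering two materials records a convex combination of their signatures, and unmixing recovers the fractions.

```python
import numpy as np

# Two hypothetical material spectra over 4 toy bands.
vegetation = np.array([0.05, 0.08, 0.45, 0.50])
soil       = np.array([0.30, 0.35, 0.40, 0.42])

# A pixel covering 60% vegetation and 40% soil: the linear mixing model.
mixed = 0.6 * vegetation + 0.4 * soil

# Linear unmixing: solve for the abundance fractions given the signatures.
frac, *_ = np.linalg.lstsq(
    np.column_stack([vegetation, soil]), mixed, rcond=None)
```

SISAL addresses the harder case where no pure pixel of any material exists in the scene, so the signature matrix itself must also be estimated.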

Abstract:

Given a hyperspectral image, determining the number of endmembers and the subspace in which they live, without any prior knowledge, is crucial to the success of hyperspectral image analysis. This paper introduces a new minimum mean squared error based approach to infer the signal subspace in hyperspectral imagery. The method, termed hyperspectral signal identification by minimum error (HySime), is eigendecomposition based and does not depend on any tuning parameters. It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least-squares sense. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
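A simplified eigenvalue-based estimate, in the spirit of HySime but not the exact algorithm, illustrates the idea: eigenvalues of the sample correlation matrix that rise clearly above the expected noise power are attributed to the signal subspace.

```python
import numpy as np

def signal_subspace_dim(Y, noise_std):
    """Simplified subspace-dimension estimate (illustrative, assuming
    known white noise of standard deviation `noise_std`). Y is a
    bands x pixels matrix."""
    L, N = Y.shape
    R = Y @ Y.T / N                      # sample correlation matrix
    vals = np.linalg.eigvalsh(R)
    # Count eigenvalues comfortably above the expected noise power;
    # the factor 2 is an ad-hoc margin for sampling fluctuations.
    return int(np.sum(vals > 2.0 * noise_std ** 2))
```

HySime itself avoids such a threshold by explicitly minimising the mean squared error between the projected and original data.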

Abstract:

This paper introduces a new toolbox for hyperspectral imagery, developed under the MATLAB environment. The toolbox provides easy access to different supervised and unsupervised classification methods. The application is also versatile and fully dynamic, since users can incorporate their own methods, which can then be reused and shared. While extending the capabilities of the MATLAB environment, the toolbox also provides a user-friendly platform for assessing the results of different methodologies. The paper also presents, using the new application, a study of several supervised and unsupervised classification methods on real hyperspectral data.
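The plug-in idea described above, where users register their own classification methods for reuse, can be sketched as follows (Python rather than MATLAB, with a hypothetical registry API and a toy nearest-mean classifier for illustration):

```python
import numpy as np

classifiers = {}  # hypothetical registry: name -> classification method

def register(name):
    # Decorator that makes a user-supplied method available by name,
    # so it can be reused and shared like the toolbox's built-ins.
    def deco(fn):
        classifiers[name] = fn
        return fn
    return deco

@register("nearest_mean")
def nearest_mean(train_X, train_y, X):
    # Supervised baseline: assign each sample to the class whose
    # training mean is closest in Euclidean distance.
    train_X, train_y, X = map(np.asarray, (train_X, train_y, X))
    labels = sorted(set(train_y.tolist()))
    means = np.stack([train_X[train_y == c].mean(axis=0) for c in labels])
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return [labels[i] for i in d.argmin(axis=1)]
```

Any function with the same signature can be registered under a new name and invoked uniformly through the `classifiers` dictionary.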

Abstract:

This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). The method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. The method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometrical approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
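The appeal of the Dirichlet-mixture abundance model is that every sample automatically satisfies non-negativity and the sum-to-one constraint. The sketch below (Python, with invented mixture parameters) samples abundances from such a mixture and builds mixed pixels under the linear mixing model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two Dirichlet modes over 3 endmembers (illustrative parameters, not
# values from the paper): one mode favours the first endmember, the
# other favours the third.
alphas = [np.array([8.0, 1.0, 1.0]), np.array([1.0, 1.0, 8.0])]
weights = [0.4, 0.6]

def sample_abundances(n):
    # Pick a mixture component per pixel, then draw its Dirichlet sample;
    # each row is non-negative and sums to one by construction.
    comp = rng.choice(len(alphas), size=n, p=weights)
    return np.stack([rng.dirichlet(alphas[c]) for c in comp])

A = sample_abundances(1000)      # pixels x endmembers
M = rng.random((100, 3))         # 100 bands, 3 endmember signatures
Y = A @ M.T                      # linear mixing model: pixels x bands
```

DECA runs this generative model in reverse, inferring both the mixing matrix and the Dirichlet mixture parameters from `Y` via a GEM-type algorithm.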