986 results for International image


Relevance:

20.00%

Publisher:

Abstract:

Introduction: The Standard Uptake Value (SUV) is a measurement of uptake in a tumor normalized to a distribution volume, and is used to quantify 18F-Fluorodeoxyglucose (FDG) uptake in tumors such as primary lung tumors. Several sources of error can affect its accuracy. Normalization can be based on body weight, body surface area (BSA) or lean body mass (LBM). The aim of this study is to compare the influence of three normalization volumes on the calculation of SUV: body weight (SUVW), BSA (SUVBSA) and LBM (SUVLBM), with and without glucose correction, in patients with known primary lung tumors. The correlation between SUV and weight, height, blood glucose level, injected activity and time between injection and image acquisition is also evaluated. Methods: The sample included 30 subjects (8 female, 22 male) with primary lung tumors and a clinical indication for 18F-FDG Positron Emission Tomography (PET). Images were acquired on a Siemens Biograph scanner according to the department's protocol. Maximum-pixel SUVW was obtained for each abnormal uptake focus through a semiautomatic VOI with 3D isocontour quantification (threshold 2.5). The concentration of radioactivity (kBq/ml) was obtained, and SUVW, SUVBSA, SUVLBM and the glucose-corrected SUVs were derived mathematically. Results: Statistically significant differences between SUVW, SUVBSA and SUVLBM, and between SUVWgluc, SUVBSAgluc and SUVLBMgluc, were observed (p<0.001). The blood glucose level showed significant positive correlations with SUVW (r=0.371; p=0.043) and SUVLBM (r=0.389; p=0.034). SUVBSA was independent of variations in blood glucose level. Conclusion: The measurement of radiopharmaceutical tumor uptake normalized to different distribution volumes remains variable. Further investigation of this subject is recommended.
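The three normalizations differ only in the denominator of the SUV ratio. A minimal sketch, assuming the commonly used Du Bois BSA and James LBM formulas (the abstract does not state which variants the study used) and a reference glycaemia of 100 mg/dl for the glucose correction; all function names are illustrative:

```python
def suv(conc_kbq_ml, injected_mbq, norm_mass_g):
    # SUV = tissue concentration / (injected activity per unit of the
    # normalization volume); 1 g of tissue is taken as 1 ml
    return conc_kbq_ml / (injected_mbq * 1000.0 / norm_mass_g)

def bsa_dubois(weight_kg, height_cm):
    # Du Bois body surface area (m^2) -- one common BSA formula
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

def lbm_james(weight_kg, height_cm, female):
    # James formula for lean body mass (kg)
    if female:
        return 1.07 * weight_kg - 148.0 * (weight_kg / height_cm) ** 2
    return 1.10 * weight_kg - 128.0 * (weight_kg / height_cm) ** 2

def suv_glucose_corrected(suv_value, glucose_mg_dl):
    # rescale SUV to a reference blood glucose level of 100 mg/dl
    return suv_value * glucose_mg_dl / 100.0
```

SUVW divides by body weight, SUVLBM by the James LBM estimate, while SUVBSA substitutes an area (m²) for a mass in the denominator, so its numerical scale is handled by convention.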

Relevance:

20.00%

Publisher:

Abstract:

This paper reports on the design and development of an Android-based context-aware system to support Erasmus students during their mobility in Porto. It enables: (i) guest users to create, rate and store personal points of interest (POI) in a private, local on-board database; and (ii) authenticated users to upload and share POI, as well as get and rate recommended POI from the shared central database. The system is a distributed client/server application. The server interacts with a central database that maintains the user profiles and the shared POI, organized by category and rating. The Android GUI application works both as a standalone application and as a client module. In standalone mode, guest users have access to generic information, a map-based interface and a local database to store and retrieve personal POI. Upon successful authentication, users can additionally share POI as well as get and rate recommendations sorted by category, rating and distance to the user.
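Sorting recommendations by distance-to-user requires a geodesic distance between the user's location and each POI. A minimal sketch of such a ranking, using the haversine great-circle distance (the paper does not specify its distance computation; names and the exact sort order shown are illustrative):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # great-circle distance between two (lat, lon) points, in km
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * r * math.asin(math.sqrt(a))

def rank_pois(pois, user_lat, user_lon):
    # sort by category, then rating (best first), then distance to user
    return sorted(pois, key=lambda p: (p["category"], -p["rating"],
                  haversine_km(user_lat, user_lon, p["lat"], p["lon"])))
```

In the real application this ordering would typically be done server-side (SQL `ORDER BY`), but the ranking logic is the same.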

Relevance:

20.00%

Publisher:

Abstract:

Project work presented to the Instituto Superior de Contabilidade e Administração do Porto for the degree of Master in Digital Marketing, under the supervision of Mestre António da Silva Vieira.

Relevance:

20.00%

Publisher:

Abstract:

Measurements in civil engineering load tests usually require considerable time and complex procedures. Measurements are therefore usually constrained by the number of sensors, resulting in a restricted monitored area. Image processing analysis is an alternative that enables measurement of the complete area of interest with a simple and effective setup. In this article, photo sequences taken with a digital camera during load displacement tests were processed with image correlation algorithms. Three different image processing algorithms were applied to real images taken from tests on PVC and Plexiglas specimens. The data obtained from the image processing algorithms were also compared with the data from physical sensors. Complete displacement and strain maps were obtained. Results show that the accuracy of the measurements obtained by photogrammetry is equivalent to that of the physical sensors, but with much less equipment and fewer setup requirements. © 2015 Computer-Aided Civil and Infrastructure Engineering.
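The core of such image correlation is template matching: a patch around each measurement point in the reference image is located in each subsequent frame, and the shift of the best match gives the displacement at that point. A minimal sketch using normalized cross-correlation over an integer search window (the article does not name the specific algorithms used; function and parameter names are illustrative):

```python
import numpy as np

def ncc_displacement(ref, cur, y, x, tpl=8, search=5):
    """Track the patch centered at (y, x) in `ref` within `cur` using
    normalized cross-correlation; returns (dy, dx) in whole pixels."""
    t = ref[y - tpl:y + tpl + 1, x - tpl:x + tpl + 1].astype(float)
    t = t - t.mean()
    best, best_dyx = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            w = cur[y + dy - tpl:y + dy + tpl + 1,
                    x + dx - tpl:x + dx + tpl + 1].astype(float)
            w = w - w.mean()
            denom = np.sqrt((t * t).sum() * (w * w).sum())
            if denom == 0:
                continue
            score = (t * w).sum() / denom  # correlation in [-1, 1]
            if score > best:
                best, best_dyx = score, (dy, dx)
    return best_dyx
```

Production DIC codes refine this to sub-pixel accuracy by interpolating the correlation surface; the strain map then follows from spatial gradients of the displacement field.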

Relevance:

20.00%

Publisher:

Abstract:

Conventional film-based X-ray imaging systems are being replaced by their digital equivalents. Different approaches are being followed, considering direct or indirect conversion, with the latter technique dominating. The typical indirect-conversion X-ray panel detector uses a phosphor for X-ray conversion coupled to a large-area array of amorphous silicon based optical sensors and switching thin-film transistors (TFTs). The pixel information can then be read out by switching the corresponding line and column transistors, routing the signal to an external amplifier. In this work we follow an alternative approach, where the electrical switching performed by the TFTs is replaced by optical scanning using a low-power laser beam and a sensing/switching PINPIN structure, resulting in a simpler device. The optically active device is a PINPIN array, sharing both front and back electrical contacts, deposited over a glass substrate. During X-ray exposure, each sensing-side photodiode collects photons generated by the scintillator screen (560 nm), charging its internal capacitance. Subsequently, a laser beam (445 nm) scans the switching diodes (back side), retrieving the stored charge sequentially and reconstructing the image. In this paper we present recent work on the optoelectronic characterization of the PINPIN structure to be incorporated in the X-ray image sensor. The results of the optoelectronic characterization of the device and its dependence on the scanning beam parameters are presented and discussed. Preliminary results of line scans are also presented. © 2014 Elsevier B.V. All rights reserved.
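The expose-then-scan readout can be pictured with a toy model: exposure stores a charge proportional to the local light dose in each pixel's capacitance, and the laser then addresses pixels one by one, reading and clearing each stored charge to rebuild the image sequentially. A deliberately simplified sketch that ignores all device physics; every name is illustrative:

```python
import numpy as np

def expose(image):
    # stored charge proportional to the scintillator light reaching each pixel
    return image.copy()

def laser_scan(charge):
    # the beam visits switching diodes line by line; each read consumes
    # the stored charge, so a second scan would return zeros
    h, w = charge.shape
    out = np.zeros_like(charge)
    for row in range(h):
        for col in range(w):
            out[row, col] = charge[row, col]
            charge[row, col] = 0.0
    return out
```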

Relevance:

20.00%

Publisher:

Abstract:

Introduction – The information searching carried out by higher-education students in electronic resources does not necessarily correspond to mastery of the competencies of searching, analysing, evaluating, selecting and making good use of the retrieved information. The concept of information literacy gains relevance and prominence insofar as it encompasses the competencies to recognise when information is needed and to act efficiently and effectively in obtaining and using it. Objective – The goal of the Escola Superior de Tecnologia da Saúde de Lisboa (ESTeSL) was to train students, teachers and researchers outside ESTeSL in information literacy competencies. Methods – The training was integrated into national and international projects, depending on the target audiences, themes, contents, workload and the request of the partner institution. The Fundação Calouste Gulbenkian was the main financial sponsor. Results – Several interventions took place in Portugal and abroad. In 2010, in Angola, at the Instituto Médio de Saúde do Bengo, 10 librarians were trained in building and managing a health library and introduced to information literacy (35h). In 2014, as part of the ERASMUS Intensive Programme, OPTIMAX (Radiation Dose and Image Quality Optimisation in Medical Imaging) trained 40 radiography teachers and students (from Portugal, the United Kingdom, Norway, the Netherlands and Switzerland) in methodology, information searching in MEDLINE and Web of Science, and Mendeley as a reference manager (4h). The final works of this course were published as an ebook (http://usir.salford.ac.uk/34439/1/Final%20complete%20version.pdf), whose editorial revision was the responsibility of the librarians.
Throughout 2014, at the Escola Superior de Educação, Escola Superior de Dança, Instituto Politécnico de Setúbal and Faculdade de Medicina de Lisboa, and throughout 2015, at the Universidade Aberta, Escola Superior de Comunicação Social, Instituto Egas Moniz, Faculdade de Letras de Lisboa and Centro de Linguística da Universidade de Lisboa, contents were designed on the use of ZOTERO and Mendeley for managing bibliographic references and on a new way of doing research. Each of these sessions (2.5h) involved about 25 final-year students, master's students and teachers. In 2015, in Mozambique, at the Instituto Superior de Ciências da Saúde, 5 librarians and 46 students and teachers were trained (70h). The contents taught were: 1) management and organisation of a health library (for librarians); 2) information literacy: information searching in MEDLINE, SciELO and RCAAP, reference managers and how to avoid plagiarism (for librarians and final-year radiography students). The workload allocated to the students included tutoring of their undergraduate monographs, in collaboration with two other teachers from the project. Training at other Portuguese higher-education institutions is scheduled for 2016. Similar training is also envisaged in Timor-Leste, whose contents, dates and workload are yet to be scheduled. Conclusions – These initiatives benefit the institution (through visibility), the librarians (by demonstrating competencies) and the students, teachers and researchers (by gaining new competencies and autonomy). The ESTeSL information literacy project has contributed effectively to the construction and production of knowledge in academia, both nationally and internationally, with the library as the privileged partner in this culture of collaboration.

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes an FPGA-based architecture for onboard hyperspectral unmixing. The method, based on Vertex Component Analysis (VCA), has several advantages: it is unsupervised, fully automatic, and works without a dimensionality reduction (DR) pre-processing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 SoC FPGA, based on the Artix-7 FPGA programmable logic, and tested using real hyperspectral datasets. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform for implementing high-performance, low-cost embedded systems.
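VCA exploits the fact that, under the linear mixing model, endmembers are the vertices of a simplex enclosing the data, so projecting all pixels onto a suitable direction and taking the extreme one yields a vertex. A simplified, CPU-only sketch of that projection step (not the full VCA algorithm and unrelated to the FPGA implementation; names are illustrative):

```python
import numpy as np

def vca_sketch(Y, p, seed=0):
    """Pick p candidate endmember pixels from Y (bands x pixels) by repeatedly
    projecting all pixels onto a direction orthogonal to the endmembers
    found so far and taking the extreme projection."""
    L, N = Y.shape
    rng = np.random.default_rng(seed)
    E = np.zeros((L, p))
    idx = []
    for i in range(p):
        if i == 0:
            P = np.eye(L)
        else:
            A = E[:, :i]
            # orthogonal projector onto the complement of span(A)
            P = np.eye(L) - A @ np.linalg.pinv(A)
        f = P @ rng.standard_normal(L)   # random direction in that complement
        v = f @ Y                        # project every pixel onto f
        j = int(np.argmax(np.abs(v)))    # extreme pixel = simplex vertex
        idx.append(j)
        E[:, i] = Y[:, j]
    return E, idx
```

Because a linear function over a simplex attains its maximum at a vertex, each iteration lands on a (near-)pure pixel when pure pixels exist in the scene.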

Relevance:

20.00%

Publisher:

Abstract:

Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GB per flight. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor cannot separate spectrally distinct materials. Spectral unmixing is one of the most important tasks in hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-hungry, which compromises their use in applications under onboard constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth and relatively low cost of these units, which make them appealing for onboard data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure-pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.

Relevance:

20.00%

Publisher:

Abstract:

Master's dissertation presented to the Instituto de Contabilidade e Administração do Porto for the degree of Master in Entrepreneurship and Internationalisation, under the supervision of Mestre Adalmiro Álvaro Malheiro de Castro Andrade Pereira.

Relevance:

20.00%

Publisher:

Abstract:

Dissertation presented to obtain the degree of Doctor of Philosophy in Electrical Engineering, speciality in Perceptional Systems, by the Universidade Nova de Lisboa, Faculty of Sciences and Technology.

Relevance:

20.00%

Publisher:

Abstract:

Given a hyperspectral image, determining the number of endmembers and the subspace in which they live, without any prior knowledge, is crucial to the success of hyperspectral image analysis. This paper introduces a new minimum mean squared error based approach to infer the signal subspace in hyperspectral imagery. The method, termed hyperspectral signal identification by minimum error (HySime), is eigendecomposition based and does not depend on any tuning parameters. It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least-squares sense. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
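The eigenvalue selection step can be illustrated with a simplified stand-in: eigendecompose the sample correlation matrix and keep the eigen-directions whose power rises clearly above a known noise floor. This replaces HySime's minimum-MSE criterion with a plain threshold and assumes the noise variance is given rather than estimated, so it is only a sketch; names are illustrative:

```python
import numpy as np

def subspace_dim_sketch(Y, noise_var):
    """Estimate the signal-subspace dimension of Y (bands x pixels):
    eigendecompose the sample correlation matrix and keep eigen-directions
    whose power exceeds the noise floor (simplified stand-in for HySime)."""
    L, N = Y.shape
    R = (Y @ Y.T) / N                 # sample correlation matrix
    w, V = np.linalg.eigh(R)          # eigenvalues in ascending order
    keep = w > 2.0 * noise_var        # factor 2 guards against sampling noise
    return int(keep.sum()), V[:, keep]
```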

Relevance:

20.00%

Publisher:

Abstract:

This paper introduces a new toolbox for hyperspectral imagery, developed in the MATLAB environment. The toolbox provides easy access to different supervised and unsupervised classification methods. The application is also versatile and fully dynamic, since users can embed their own methods, which can then be reused and shared. While extending the potential of the MATLAB environment, the toolbox also provides a user-friendly platform for assessing the results of different methodologies. The paper also presents, using the new application, a study of several supervised and unsupervised classification methods on real hyperspectral data.

Relevance:

20.00%

Publisher:

Abstract:

This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). The method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on Independent Component Analysis (ICA) and on geometry-based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
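The generative model DECA assumes is easy to simulate: draw abundance vectors from a Dirichlet density, which automatically satisfies non-negativity and the constant-sum constraint, and mix them linearly with the endmember signatures. A minimal sketch of that forward model only (not of the GEM inference); names are illustrative:

```python
import numpy as np

def simulate_mixtures(E, alpha, n, seed=0):
    """Generate n pixels under the linear mixing model Y = E @ A, with
    abundance columns of A drawn from a Dirichlet(alpha) density, so each
    column is non-negative and sums to one. E is bands x endmembers."""
    rng = np.random.default_rng(seed)
    A = rng.dirichlet(alpha, size=n).T   # endmembers x pixels
    return E @ A, A
```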

Relevance:

20.00%

Publisher:

Abstract:

Introduction: Prematurity is a risk factor for lesions of the central nervous system, and a gestational age below 36 weeks further increases that risk, namely for cerebral palsy (CP) of the spastic diplegia type. The sit-to-stand (STS) movement sequence, a motor skill that demands postural control at the ankle, appears to be a functional task frequently compromised in preterm children with and without CP. Objective(s): To describe the behaviour of the ankle muscles tibialis anterior (TA) and soleus (SOL) with respect to activation timing, magnitude and muscle co-activation during phase I and the beginning of phase II of the STS movement sequence, performed by five preterm children with spastic diplegic CP and five preterm children without a diagnosis of neuromotor alterations, the former undergoing an intervention programme based on the principles of the Bobath concept – Neurodevelopmental Treatment (NDT). Methods: Ten preterm children were assessed, five with CP and five without a diagnosis of neuromotor alterations. Surface electromyography was used to record muscle parameters, namely the activation timings, magnitudes and co-activation values of the TA and SOL muscles, associated with phase I and the beginning of phase II of the STS movement sequence. Video was recorded to facilitate the assessment of the movement components associated with this task. These procedures were carried out at a single time point for the children without a diagnosis of neuromotor alterations, and at two time points, before and after the application of an intervention programme according to the Bobath concept – NDT, for the children with CP. The latter were also assessed with the Motor Function Measure Test (TMFM-88) and the International Classification of Functioning, Disability and Health – children and youth version (CIF-CJ).
Results: Electromyography showed that both groups presented activation timings outside the temporal window considered as anticipatory postural adjustments (APAs) and high co-activation levels, in some cases with inversion of the muscle recruitment order, which it was possible to modify in the children with CP after the intervention period. In these children, the STS movement sequence was also performed with fewer compensations and a better relationship between proximal and distal structures, consistent with the increase in the final TMFM-88 score and the positive changes in the activity and participation items of the CIF-CJ. Conclusion: Preterm children with and without CP presented alterations in ankle postural control and high levels of muscle co-activation. After the intervention period, the children with CP showed positive changes in muscle activation timing and co-activation, with a functional impact evidenced by the increase in the final TMFM-88 score and positive changes in the CIF-CJ.

Relevance:

20.00%

Publisher:

Abstract:

The increase in long-term international assignments requires that more attention be paid to the preparation of these foreign assignments, especially the recruitment and selection of expatriates. This article explores how the recruitment and selection of expatriates is carried out in Portuguese companies, examining the main criteria in the decision to send employees on international assignments. The paper is based on qualitative case studies of companies located in Portugal. Data were collected through semi-structured interviews with 42 expatriates and 18 organisational representatives from nine Portuguese companies. The findings show that the most important criteria are: (1) managers' trust, (2) years of service, (3) prior technical and language competences, (4) organisational knowledge and (5) availability. Based on the findings, the article discusses the main theoretical and managerial implications in detail. Suggestions for further research are also presented.