995 results for Enhanced images
Abstract:
This article reports on a-Si:H-based low-leakage blue-enhanced photodiodes for dual-screen x-ray imaging detectors. Doped nanocrystalline silicon was incorporated in both the n- and p-type regions to reduce absorption losses for light arriving from the top and bottom screens. The photodiode exhibits a dark current density of 900 pA/cm² and an external quantum efficiency of up to 90% at a reverse bias of 5 V. In the case of illumination through the tailored p-layer, the quantum efficiency of 60% at a 400 nm wavelength is almost double that of the conventional a-Si:H n-i-p photodiode.
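For context on the quantum-efficiency figures quoted above, external quantum efficiency can be related to a photodiode's measured responsivity by EQE = R·h·c/(q·λ). The short Python sketch below shows this conversion; the responsivity value used is a hypothetical example, not a measurement from the article.

```python
# Illustrative conversion between responsivity and external quantum efficiency (EQE).
# The numbers below are hypothetical examples, not measurements from the article.

H = 6.626e-34   # Planck constant (J*s)
C = 2.998e8     # speed of light (m/s)
Q = 1.602e-19   # elementary charge (C)

def eqe_from_responsivity(responsivity_a_per_w: float, wavelength_m: float) -> float:
    """EQE = (electrons out per second) / (photons in per second) = R * h * c / (q * lambda)."""
    return responsivity_a_per_w * H * C / (Q * wavelength_m)

if __name__ == "__main__":
    # e.g. a responsivity of 0.19 A/W at 400 nm corresponds to an EQE of roughly 60%
    print(f"EQE at 400 nm: {eqe_from_responsivity(0.19, 400e-9):.2f}")
```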
Abstract:
The rapid growth in genetics and molecular biology, combined with the development of techniques for genetically engineering small animals, has led to increased interest in in vivo small animal imaging. Such imaging focuses on mice and rats, which are ubiquitous in modeling human diseases and testing treatments. The use of PET in small animals allows subjects to serve as their own controls, reducing inter-animal variability; this makes it possible to perform longitudinal studies on the same animal and improves the accuracy of biological models. However, small animal PET still suffers from several limitations: the amount of radiotracer needed, limited scanner sensitivity, image resolution and image quantification could all clearly benefit from additional research. Because nuclear medicine imaging deals with radioactive decay, the emission of radiation energy through photons and particles, and the detection of these quanta and particles in different materials, the Monte Carlo method is an important simulation tool in both nuclear medicine research and clinical practice. In order to optimize the quantitative use of PET in clinical practice, data- and image-processing methods are also a field of intense interest and development. The evaluation of such methods often relies on simulated data and images, since these offer control of the ground truth. Monte Carlo simulations are widely used for PET simulation because they take into account all the random processes involved in PET imaging, from the emission of the positron to the detection of the photons by the detectors. Simulation techniques have become an important and indispensable complement for a wide range of problems that could not be addressed by experimental or analytical approaches.
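To make the role of Monte Carlo simulation in PET more concrete, the sketch below is a deliberately simplified toy model, not a validated simulator such as GATE: it samples back-to-back 511 keV annihilation photons from a point source at the centre of an idealized detector ring and estimates the coincidence detection fraction. The ring radius, axial coverage and detection efficiency are arbitrary illustrative values.

```python
import math
import random

# Toy Monte Carlo sketch of PET coincidence detection (illustrative only; real
# simulators model physics, geometry and timing in far more detail).

RING_RADIUS_CM = 40.0
AXIAL_HALF_LEN_CM = 8.0       # detector ring covers |z| <= 8 cm
DETECTION_EFFICIENCY = 0.3    # probability that a photon hitting the ring is recorded

def simulate(n_decays: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    coincidences = 0
    for _ in range(n_decays):
        # isotropic emission direction; the two annihilation photons travel back-to-back
        cos_theta = rng.uniform(-1.0, 1.0)
        sin_theta = math.sqrt(1.0 - cos_theta ** 2)
        if sin_theta == 0.0:
            continue  # photons travel along the scanner axis and miss the ring
        # axial position where each photon crosses the ring (source at the centre)
        z_at_ring = RING_RADIUS_CM * cos_theta / sin_theta
        if abs(z_at_ring) <= AXIAL_HALF_LEN_CM:
            # both photons must be detected to form a coincidence
            if rng.random() < DETECTION_EFFICIENCY and rng.random() < DETECTION_EFFICIENCY:
                coincidences += 1
    return coincidences / n_decays

if __name__ == "__main__":
    print(f"estimated coincidence fraction: {simulate(100_000):.4f}")
```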
Abstract:
Fluorescence confocal microscopy (FCM) is now one of the most important tools in biomedical research. In fact, it makes it possible to accurately study the dynamic processes occurring inside the cell and its nucleus by following the motion of fluorescent molecules over time. Due to the small amount of acquired radiation and the huge optical and electronic amplification, FCM images are usually corrupted by a severe type of Poisson noise. This noise may be even more damaging when very low intensity incident radiation is used to avoid phototoxicity. In this paper, a Bayesian algorithm is proposed to remove the Poisson intensity-dependent noise corrupting FCM image sequences. The observations are organized in a 3-D tensor in which each plane is one of the images of a cell nucleus acquired over time using the fluorescence loss in photobleaching (FLIP) technique. The method removes the noise by simultaneously taking into account spatial and temporal correlations. This is accomplished by using an anisotropic 3-D filter that may be tuned separately in the space and time dimensions. Tests using synthetic and real data are presented to illustrate the application of the algorithm, and a comparison with several state-of-the-art algorithms is also provided.
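The Bayesian denoiser of the paper is not reproduced here, but the sketch below illustrates the general idea of an anisotropic 3-D filter whose spatial and temporal smoothing can be tuned independently on a FLIP-like image stack. It uses an Anscombe variance-stabilizing transform followed by Gaussian smoothing as a much simpler stand-in for Poisson noise handling; all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Simplified stand-in for the paper's Bayesian method: Anscombe transform to
# stabilize Poisson noise, then an anisotropic 3-D Gaussian filter whose spatial
# and temporal smoothing strengths can be tuned independently.

def denoise_flip_stack(stack: np.ndarray,
                       sigma_space: float = 1.5,
                       sigma_time: float = 3.0) -> np.ndarray:
    """stack has shape (time, rows, cols); returns a denoised stack of the same shape."""
    transformed = 2.0 * np.sqrt(stack.astype(np.float64) + 3.0 / 8.0)  # Anscombe transform
    smoothed = gaussian_filter(transformed, sigma=(sigma_time, sigma_space, sigma_space))
    denoised = (smoothed / 2.0) ** 2 - 3.0 / 8.0                       # approximate inverse
    return np.clip(denoised, 0.0, None)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # synthetic FLIP-like sequence: intensity decays over time, then Poisson noise is added
    clean = np.stack([50.0 * np.exp(-t / 40.0) * np.ones((64, 64)) for t in range(80)])
    noisy = rng.poisson(clean).astype(np.float64)
    print(denoise_flip_stack(noisy).shape)
```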
Abstract:
The attached document is the post-print version (the version corrected by the publisher).
Abstract:
Fluorescence confocal microscopy images present a low signal-to-noise ratio and a time intensity decay due to the so-called photoblinking and photobleaching effects. These effects, together with the Poisson multiplicative noise that corrupts the images, make long-term biological observation processes very difficult.
Abstract:
The World Wide Web (Web) was envisioned as a network of interlinked hypertext documents, creating an information space where humans and machines could communicate. However, the information held in the traditional Web was/is stored in an unstructured way, which means that only humans can conveniently consume it. Consequently, searching for information on the syntactic Web is a task performed mainly by humans, and as such it is not always easy to accomplish. In this context, it became essential to evolve towards a more structured and more meaningful Web, in which information is given well-defined meaning so as to enable cooperation between humans and machines. This Web is usually referred to as the Semantic Web. Moreover, the Semantic Web is only fully achievable if data from different sources are linked, thus creating a repository of Linked Open Data (LOD). With the emergence of a new Web of Linked (Open) Data (i.e. the Semantic Web), new opportunities and challenges have arisen. Question Answering (QA) over semantic information is currently an active research area that tries to take advantage of Semantic Web technologies to improve the task of answering questions. The main goal of the World Search project is to exploit the Semantic Web to create mechanisms that support users in specific application domains in answering complex questions based on data from different repositories. However, the evaluation of the state of the art shows that existing applications do not support users in answering complex questions. Accordingly, the work described in this document focuses on studying/developing methodologies/processes that help users find exact/correct answers to complex questions that cannot be answered using traditional systems. This includes: (i) overcoming the difficulty users have in visualizing the schema underlying the knowledge repositories; (ii) bridging the gap between the natural language expressed by users and the (formal) language understood by the repositories; (iii) processing and returning relevant information that appropriately answers the users' questions. To this end, a set of functionalities considered necessary to support users in answering complex questions is identified, and a formal description of these functionalities is provided. The proposal is materialized in a prototype that implements the previously described functionalities. The experiments carried out with the developed prototype show that users effectively benefit from the presented functionalities: they allow users to navigate efficiently over the information repositories; the gap between the conceptualizations of the different stakeholders is minimized; and users are able to answer complex questions that they could not answer with traditional systems. In short, this document presents a proposal that demonstrably allows complex questions over semi-structured repositories to be answered in a user-driven way.
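As a minimal illustration of the gap this work addresses, between a user's natural-language question and the formal language understood by Linked Data repositories, the sketch below sends a hand-written SPARQL query to the public DBpedia endpoint. This uses generic Linked Data tooling (SPARQLWrapper) and is not the World Search prototype; the question, query and endpoint are illustrative assumptions.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# A user question such as "Which country has Lisbon as its capital?" must be turned
# into a formal SPARQL query before a Linked Data repository can answer it. The query
# below is hand-written for illustration; the prototype described above aims to help
# users build such queries without knowing the underlying schema.

QUERY = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT DISTINCT ?country WHERE {
  ?country dbo:capital dbr:Lisbon .
}
"""

def ask_dbpedia() -> list[str]:
    endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
    endpoint.setQuery(QUERY)
    endpoint.setReturnFormat(JSON)
    results = endpoint.query().convert()
    return [row["country"]["value"] for row in results["results"]["bindings"]]

if __name__ == "__main__":
    for uri in ask_dbpedia():
        print(uri)
```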
Abstract:
Introduction – Contrast-enhanced breast magnetic resonance imaging (MRI) is an indispensable tool for the characterization of breast lesions. For this purpose, good fat saturation is crucial in order to delineate the observed lesions. The aim of this work is to analyse, quantitatively and qualitatively, fat saturation techniques in breast MRI, namely the SPAIR and Dixon techniques. Methodology – A 1.5 T MRI scanner with a dedicated breast coil was used. During the examination, an additional sequence was applied in which the fat saturation technique used was Dixon. The images obtained with the SPAIR and Dixon techniques were then analysed, and regions of interest (ROI) were placed on the lesion, the mammary gland, the adipose tissue and the air. The signal values obtained were recorded, and the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and fat saturation uniformity were calculated for the same structures in the different sequences. Results – The Dixon fat suppression technique showed quantitative and qualitative results superior to those of the SPAIR technique. Conclusions – The use of the Dixon technique for fat saturation is recommended in dynamic contrast-enhanced sequences, specifically in breast MRI studies.
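To make the quantitative comparison concrete, the sketch below computes SNR, CNR and a simple fat-saturation uniformity index from ROI statistics, following commonly used definitions (SNR as mean signal over the standard deviation of an air ROI, CNR as a signal difference over the same standard deviation). The exact formulas used in the study may differ, and all ROI values in the example are placeholders.

```python
from dataclasses import dataclass

# Common ROI-based image quality metrics used to compare fat-saturation techniques
# (SPAIR vs. Dixon). The formulas follow widely used definitions; the study's exact
# definitions may differ, and the example values below are placeholders.

@dataclass
class Roi:
    mean: float  # mean signal intensity in the ROI
    std: float   # standard deviation of the signal in the ROI

def snr(tissue: Roi, air: Roi) -> float:
    return tissue.mean / air.std

def cnr(lesion: Roi, background: Roi, air: Roi) -> float:
    return (lesion.mean - background.mean) / air.std

def fat_sat_uniformity(fat: Roi) -> float:
    """Lower relative variation of the suppressed fat signal means more uniform saturation."""
    return fat.std / fat.mean

if __name__ == "__main__":
    lesion, gland, fat, air = Roi(820, 60), Roi(430, 55), Roi(95, 20), Roi(12, 6)
    print(f"SNR(lesion) = {snr(lesion, air):.1f}")
    print(f"CNR(lesion vs gland) = {cnr(lesion, gland, air):.1f}")
    print(f"fat saturation uniformity index = {fat_sat_uniformity(fat):.2f}")
```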
Abstract:
Conference - 16th International Symposium on Wireless Personal Multimedia Communications (WPMC) - Jun 24-27, 2013
Abstract:
Master's degree in Radiation Applied to Health Technologies - Specialization branch: Radiation Therapy
Abstract:
As CMOS technology miniaturization progresses, leakage power consumption starts to dominate dynamic power consumption. Recent technology trends have equipped modern embedded processors with several sleep states and reduced the overhead (energy/time) of sleep transitions. The potential of dynamic voltage and frequency scaling (DVFS) to save energy is diminishing due to efficient (low-overhead) sleep states and increased static (leakage) power consumption. The state-of-the-art research on static power reduction at the system level is based on assumptions that cannot easily be integrated into practical systems. We propose a novel enhanced race-to-halt approach (ERTH) to reduce the overall system energy consumption. Exhaustive simulations demonstrate the effectiveness of our approach, showing an improvement of up to 8% over an existing work.
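The trade-off behind race-to-halt can be illustrated with a back-of-the-envelope energy model, shown below; this is not the ERTH algorithm itself, and all power and timing figures are hypothetical. It compares running the workload at full frequency and then sleeping for the remaining slack against stretching execution with DVFS while static power keeps accruing.

```python
# Back-of-the-envelope comparison of race-to-halt vs. DVFS (illustrative model only,
# not the ERTH algorithm from the paper). All power figures are hypothetical.

P_STATIC = 0.40             # static (leakage) power while active, in watts
P_DYN_MAX = 0.60            # dynamic power at maximum frequency, in watts
P_SLEEP = 0.01              # power in the sleep state, in watts
E_SLEEP_TRANSITION = 0.002  # energy overhead of entering/leaving sleep, in joules

def energy_race_to_halt(work_s: float, deadline_s: float) -> float:
    """Run at full speed for work_s seconds, then sleep until the deadline."""
    active = work_s * (P_STATIC + P_DYN_MAX)
    sleeping = (deadline_s - work_s) * P_SLEEP + E_SLEEP_TRANSITION
    return active + sleeping

def energy_dvfs(work_s: float, deadline_s: float) -> float:
    """Stretch the work over the whole deadline: dynamic energy scales roughly with f^2
    (cubic power, 1/f execution time), while leakage accrues for the full duration."""
    slowdown = deadline_s / work_s            # f_max / f
    dynamic = work_s * P_DYN_MAX / slowdown ** 2
    static = deadline_s * P_STATIC
    return dynamic + static

if __name__ == "__main__":
    work, deadline = 0.4, 1.0  # seconds of work at f_max within a 1 s deadline
    print(f"race-to-halt: {energy_race_to_halt(work, deadline):.3f} J")
    print(f"DVFS:         {energy_dvfs(work, deadline):.3f} J")
```

With leakage this dominant, racing to completion and sleeping wins, which is the intuition the approach builds on.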
Abstract:
Drilling of composite plates normally uses traditional techniques, but the risk of damage is high, so the use of non-destructive testing (NDT) is important. Damage in a carbon/epoxy plate is evaluated by enhanced X-ray radiography. Four different drills are used. The images are analysed using computational vision techniques, and surface roughness is compared. The results suggest strategies for delamination reduction.
Abstract:
In this paper an automatic classification algorithm is proposed for the diagnosis of liver steatosis, also known as fatty liver, from ultrasound images. The features automatically extracted from the ultrasound images and used by the classifier are basically the ones used by physicians when diagnosing the disease by visual inspection of the ultrasound images. The main novelty of the method is the utilization of the speckle noise that corrupts the ultrasound images to compute textural features of the liver parenchyma relevant for the diagnosis. The algorithm uses a Bayesian framework to compute a noiseless image, containing the anatomic and echogenic information of the liver, and a second image containing only the speckle noise, which is used to compute the textural features. The classification results, obtained with a Bayes classifier using manually classified data as ground truth, show that the automatic classifier reaches an accuracy of 95% and a sensitivity of 100%.
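The pipeline described above, decomposing the ultrasound image into a despeckled component and a speckle field, extracting textural features, and classifying with a Bayes classifier, can be sketched in simplified form as follows. The decomposition below uses a plain median filter rather than the paper's Bayesian estimation, the features are crude stand-ins, and the classifier is scikit-learn's Gaussian naive Bayes; the synthetic patches are for illustration only.

```python
import numpy as np
from scipy.ndimage import median_filter
from sklearn.naive_bayes import GaussianNB

# Simplified sketch of the described pipeline: split the ultrasound image into a
# despeckled component and a residual speckle field, extract simple intensity/texture
# features, and classify with a Bayes classifier. The paper's Bayesian decomposition
# is replaced here by a plain median filter for illustration.

def extract_features(image: np.ndarray) -> np.ndarray:
    despeckled = median_filter(image, size=5)
    speckle = image - despeckled                      # residual carrying the texture
    return np.array([
        despeckled.mean(),                            # overall echogenicity
        despeckled.std(),                             # large-scale inhomogeneity
        speckle.std(),                                # speckle "strength"
        np.abs(np.diff(speckle, axis=0)).mean(),      # crude texture roughness measure
    ])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # synthetic stand-ins for liver parenchyma patches (label 1 = steatosis, 0 = normal)
    X = np.array([
        extract_features(rng.normal(loc=120 + 30 * lab, scale=12 + 6 * lab, size=(64, 64)))
        for lab in (0, 1) for _ in range(20)
    ])
    y = np.array([lab for lab in (0, 1) for _ in range(20)])
    clf = GaussianNB().fit(X, y)
    print("training accuracy:", clf.score(X, y))
```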
Abstract:
Dissertation presented at the Faculty of Science and Technology of the New University of Lisbon in fulfillment of the requirements for the Master's degree in Electrical and Computer Engineering
Abstract:
The use of fiber reinforced plastics has increased in recent decades due to their unique properties: low weight, high strength and high stiffness. Drilling of composite plates can be carried out on conventional machinery with some adaptations. However, the presence of typical defects like delamination can affect the mechanical properties of the produced parts. In this paper the influence of delamination on the bearing stress of drilled hybrid carbon+glass/epoxy quasi-isotropic plates is studied using image processing and analysis techniques. Results from the bearing test show that damage minimization is an important means of improving the mechanical properties of the joint area of the plate. The appropriateness of the image processing and analysis techniques used to measure the damaged area is demonstrated.
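As a rough illustration of the kind of image processing involved, the sketch below thresholds an enhanced X-ray image of the hole region, measures the damaged area, and relates it to the nominal hole area. It is a generic approach with placeholder parameters, not the exact procedure used in the paper.

```python
import numpy as np

# Generic sketch of measuring a delaminated region around a drilled hole from an
# enhanced X-ray image: threshold the darker (dye-penetrated) zone, count pixels,
# and relate the damaged area to the nominal hole area. Not the paper's exact method.

def delamination_ratio(image: np.ndarray,
                       pixel_size_mm: float,
                       hole_diameter_mm: float,
                       threshold: float) -> float:
    damaged_pixels = np.count_nonzero(image < threshold)   # dye-enhanced damage is darker
    damaged_area_mm2 = damaged_pixels * pixel_size_mm ** 2
    hole_area_mm2 = np.pi * (hole_diameter_mm / 2.0) ** 2
    return damaged_area_mm2 / hole_area_mm2

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    synthetic = rng.uniform(0.0, 1.0, size=(512, 512))      # placeholder for a real X-ray image
    ratio = delamination_ratio(synthetic, pixel_size_mm=0.05,
                               hole_diameter_mm=6.0, threshold=0.1)
    print(f"damaged-to-hole area ratio: {ratio:.2f}")
```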
Abstract:
On-chip debug (OCD) features are frequently available in modern microprocessors. Their contribution to shortening the time-to-market justifies the industry's investment in this area, where a number of competing or complementary proposals are available or under development, e.g. NEXUS, CJTAG, IJTAG. The controllability and observability features provided by OCD infrastructures constitute a valuable toolbox that can be used well beyond the debugging arena, improving the return on investment by diluting its cost across a wider spectrum of application areas. This paper discusses the use of OCD features for validating fault-tolerant architectures, and in particular the efficiency of various fault injection methods provided by enhanced OCD infrastructures. The reference data for our comparative study was captured on a workbench comprising the 32-bit Freescale MPC-565 microprocessor, an iSYSTEM IC3000 debugger (iTracePro version) and the Winidea 2005 debugging package. All enhanced OCD infrastructures were implemented in VHDL, and the results were obtained by simulation within the same fault injection environment. The focus of this paper is on the comparative analysis of the experimental results obtained for various OCD configurations and debugging scenarios.
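Although the fault injection in the paper is driven through real and simulated OCD infrastructures, the basic operation such infrastructures automate, flipping bits in processor state at a chosen trigger point and classifying the outcome against a golden run, can be sketched abstractly as below. The register model, trigger point and outcome categories are simplified assumptions.

```python
import random

# Abstract sketch of the operation an OCD-based fault injector automates: stop the
# target at a trigger point, flip a bit in a register or memory word, resume, and
# classify the outcome. Real campaigns do this on actual processor state via the
# debug infrastructure; here a plain integer stands in for a 32-bit register.

def inject_bit_flip(register_value: int, bit: int) -> int:
    """Flip a single bit of a 32-bit register value."""
    return (register_value ^ (1 << bit)) & 0xFFFFFFFF

def campaign(golden_run, faulty_run, n_faults: int, seed: int = 0) -> dict:
    rng = random.Random(seed)
    expected = golden_run()
    outcomes = {"silent": 0, "wrong result": 0, "exception": 0}
    for _ in range(n_faults):
        bit = rng.randrange(32)
        try:
            result = faulty_run(bit)
            outcomes["silent" if result == expected else "wrong result"] += 1
        except Exception:
            outcomes["exception"] += 1
    return outcomes

if __name__ == "__main__":
    def golden():
        return sum(range(100))

    def faulty(bit):
        acc = 0
        for i in range(100):
            if i == 50:                    # trigger point: corrupt the accumulator
                acc = inject_bit_flip(acc, bit)
            acc += i
        return acc

    print(campaign(golden, faulty, n_faults=200))
```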