950 results for "Online services using open-source NLP tools"
Abstract:
Dissertation for the degree of Master in Civil Engineering, in the specialisation area of Roads and Transport.
Abstract:
Aim: To optimise a set of exposure factors, at the lowest effective dose, for delineating spinal curvature with the modified Cobb method on a full-spine examination using computed radiography (CR) of a 5-year-old paediatric anthropomorphic phantom. Methods: Images were acquired by varying a set of parameters: position (antero-posterior (AP), postero-anterior (PA) and lateral), kilovoltage peak (kVp) (66-90), source-to-image distance (SID) (150-200 cm), broad focus and the use of a grid (grid in/out), to analyse the impact on effective dose (E) and image quality (IQ). IQ was analysed using two approaches: objective [contrast-to-noise ratio (CNR)] and perceptual, using 5 observers. Monte Carlo modelling was used for dose estimation. Cohen's kappa coefficient was used to calculate inter-observer variability. The angle was measured using Cobb's method on lateral projections under different imaging conditions. Results: PA positioning gave the lowest effective dose (0.013 mSv) compared with AP (0.048 mSv) and lateral (0.025 mSv). The exposure parameters that allowed the lowest dose were 200 cm SID, 90 kVp, broad focus and grid out for paediatrics using an Agfa CR system. Thirty-seven images were assessed for IQ and thirty-two were classified as adequate. Cobb angle measurements varied between 16° ± 2.9° and 19.9° ± 0.9°. Conclusion: Cobb angle measurements can be performed at the lowest dose, with a low contrast-to-noise ratio. The variation in measurements in this case was ±2.9°, which is within the range of acceptable clinical error and has no impact on clinical diagnosis. Further work is recommended on increasing the sample size and on a more robust perceptual IQ assessment protocol for observers.
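For reference, the objective image-quality metric and the inter-observer statistic mentioned above are commonly defined as follows (a minimal sketch in standard notation; the specific regions of interest and rating categories used in the study are not detailed here):

    \mathrm{CNR} = \frac{\lvert \bar{S}_{\mathrm{ROI}} - \bar{S}_{\mathrm{bg}} \rvert}{\sigma_{\mathrm{bg}}},
    \qquad
    \kappa = \frac{p_o - p_e}{1 - p_e}

where \bar{S}_{\mathrm{ROI}} and \bar{S}_{\mathrm{bg}} are the mean pixel values in a region of interest and in a background region, \sigma_{\mathrm{bg}} is the standard deviation of the background, p_o is the observed agreement between observers and p_e is the agreement expected by chance.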
Abstract:
This paper reviews the literature on lowering dose to paediatric patients through the use of exposure factors and additional filtration. Dose reference levels set by the International Commission on Radiological Protection (ICRP) are considered. Guidance put in place in 1996 requires updating to bring it into line with modern imaging equipment. A wide range of literature specifies that grids should not be used on paediatric patients. Although much of the literature advocates additional filtration, views on the relative benefits of aluminium versus copper filtration, and on their effects on dose reduction and image quality, vary. Changing kVp and mAs affects both the dose to the patient and image quality. Collimation protects adjacent structures whilst reducing scattered radiation.
Abstract:
Purpose: To determine whether using different combinations of kVp and mAs with additional filtration can reduce the effective dose to a paediatric phantom whilst maintaining diagnostic image quality. Methods: 27 images of a paediatric AP pelvis phantom were acquired with different kVp, mAs and additional copper filtration. Images were displayed on quality-controlled monitors under dimmed lighting. Ten diagnostic radiographers (5 students and 5 experienced radiographers) had eye tests to assess visual acuity before rating the images. Each image was rated for visual image quality against a reference image using two-alternative forced-choice software with a 5-point Likert scale. Physical measures (SNR and CNR) were also taken to assess image quality. Results: Of the 27 images rated, 13 were of acceptable image quality and had a dose lower than the image acquired with the standard parameters. Two were produced without filtration, 6 with 0.1 mm and 5 with 0.2 mm copper filtration. Statistical analysis found that inter-rater and intra-rater reliability was high. Discussion: It is possible to obtain an image of acceptable quality at a dose lower than published guidelines. Some areas of the study could be improved, including using a wider range of kVp and mAs to give an exact set of parameters to use. Conclusion: Additional filtration has been identified as a major tool for reducing effective dose whilst maintaining acceptable image quality in a 5-year-old phantom.
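The physical measures mentioned above (SNR and CNR) are computed from pixel statistics in regions of interest. The Python sketch below is an illustrative implementation of such measures, not the software used in the study; the ROI coordinates and the synthetic image are placeholders.

    import numpy as np

    def roi_stats(image, rows, cols):
        """Mean and standard deviation of pixel values inside a rectangular ROI."""
        roi = image[rows, cols]
        return roi.mean(), roi.std()

    def snr_cnr(image, signal_roi, background_roi):
        """SNR of the signal ROI and CNR between signal and background ROIs.

        signal_roi / background_roi are (row_slice, col_slice) pairs chosen by the user.
        """
        s_mean, _ = roi_stats(image, *signal_roi)
        b_mean, b_std = roi_stats(image, *background_roi)
        snr = s_mean / b_std                  # signal relative to background noise
        cnr = abs(s_mean - b_mean) / b_std    # contrast relative to background noise
        return snr, cnr

    # Example with synthetic data (a real analysis would load the radiograph instead).
    img = np.random.normal(100, 5, size=(512, 512))
    img[200:300, 200:300] += 40               # simulated high-attenuation region
    print(snr_cnr(img, (slice(200, 300), slice(200, 300)),
                       (slice(0, 100), slice(0, 100))))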
Abstract:
IEEE International Symposium on Circuits and Systems, pp. 724-727, Seattle, USA
Abstract:
Forensic anthropology is a forensic-science discipline concerned with the analysis of human skeletal remains for legal purposes. One of its most common applications is forensic identification, which consists of determining the biological profile (age, sex, ancestry and stature) of an individual. This process is often hampered, however, when the body is in an advanced state of decomposition and only skeletal remains exist. In that case, medical fields commonly used in the identification of cadavers, such as pathology, have to be set aside and other techniques must be applied. In this context, many anthropometric methods have been proposed to characterise a person through their skeleton. Most of the suggested procedures, however, are based on basic measuring equipment and do not take advantage of contemporary technology. Thus, in partnership with the North Delegation of the NMLCF, I. P., this thesis set out to create a computational system based on Computed Tomography (CT) images of skeletal remains which, using open-source tools, supports forensic identification. The work presented covers information management and the acquisition, processing and visualisation of CT images. In the course of this thesis a database was developed to organise the information on each set of remains, and algorithms were implemented that allow a far broader feature extraction than the one performed manually with classical measuring equipment. The final result of this study is a set of techniques that can be combined into a computational forensic-identification system, thereby creating an application with clear technological advantages.
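As an illustration of the kind of automated measurement such a system can provide, the hypothetical Python sketch below estimates the greatest length of a bone from a CT volume by thresholding bone-density voxels and measuring its bounding box; the threshold value, voxel spacing and input array are assumptions for the example, not details taken from the thesis.

    import numpy as np

    def bone_length_mm(ct_volume, voxel_spacing_mm, hu_threshold=300):
        """Estimate a bone's greatest length from a CT volume.

        ct_volume        3-D array of Hounsfield units containing a single bone
        voxel_spacing_mm (dz, dy, dx) spacing of the scan, in millimetres
        hu_threshold     HU value above which voxels are treated as bone (assumed)
        """
        bone = ct_volume > hu_threshold               # binary mask of bone voxels
        if not bone.any():
            raise ValueError("no voxels above the bone threshold")
        coords = np.argwhere(bone)                    # (N, 3) voxel indices
        extent_vox = coords.max(axis=0) - coords.min(axis=0) + 1
        extent_mm = extent_vox * np.asarray(voxel_spacing_mm)
        return float(np.linalg.norm(extent_mm))       # diagonal of the bounding box

    # Synthetic example: a 400 mm "bone" along the z axis of a 1 mm isotropic volume.
    vol = np.full((450, 64, 64), -1000, dtype=np.int16)
    vol[20:420, 28:36, 28:36] = 1200
    print(round(bone_length_mm(vol, (1.0, 1.0, 1.0))))   # ~400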
Abstract:
Dissertation presented as a partial requirement for the degree of Master in Statistics and Information Management.
Abstract:
Software tools in education have been popular since personal computers became widespread. Engineering courses led the way in this development, and these tools have become almost a standard. Engineering graduates are familiar not only with numerical analysis tools but also with simulators (e.g. of electronic circuits), computer-assisted design tools and others, depending on the degree. One of the main problems with these tools is deciding when and how students should start using them so that they are beneficial and not mere substitutes for potentially difficult calculations or design. This paper presents a software tool to be used by first-year students in electronics/electricity courses. The growing acknowledgement and acceptance of open-source software led to the choice of an open-source tool, Scilab, a numerical analysis package, as the basis for a toolbox. The toolbox was developed to be used standalone or integrated into an e-learning platform; the platform used was Moodle. The first step was to assess the mathematical skills necessary to solve the problems found in electronics and electricity courses. Analysing existing circuit-simulation tools, it is clear that even though they are very helpful in showing the end result, they are less effective for studying and self-learning, since they show results but not the intermediate steps, which are crucial in problems involving derivatives or integrals. They are also not very effective at producing graphical results that could be used in reports and for a better overall comprehension of the results. The developed tool is a toolbox based on the numerical analysis software Scilab that gives its users not only the end results of a circuit analysis but also the expressions obtained in derivative and integral calculations, signal plots, vector diagrams, etc. The toolbox runs entirely in the Moodle web platform and provides the same results as the standalone application. Students can use the toolbox through the web platform (on computers where they do not have installation privileges) or on their personal computers by installing both Scilab and the toolbox. This approach was designed for first-year students of all engineering degrees that have electronics/electricity courses in their curricula.
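The key point of the toolbox, showing the intermediate symbolic steps rather than only the numerical answer, can be illustrated with a small symbolic computation. The sketch below uses Python and SymPy rather than the Scilab toolbox described in the abstract; the circuit and symbol names are invented for the example.

    import sympy as sp

    # First-order RC circuit driven by a step of amplitude Vs applied at t = 0.
    t = sp.symbols("t", positive=True)
    R, C, Vs = sp.symbols("R C V_s", positive=True)
    vc = sp.Function("v_c")

    # Intermediate step 1: the governing ODE,  R*C*dv_c/dt + v_c = Vs.
    ode = sp.Eq(R * C * vc(t).diff(t) + vc(t), Vs)
    print("ODE:", ode)

    # Intermediate step 2: general solution, with the integration constant visible.
    general = sp.dsolve(ode, vc(t))
    print("General solution:", general)

    # Intermediate step 3: apply the initial condition v_c(0) = 0.
    particular = sp.dsolve(ode, vc(t), ics={vc(0): 0})
    print("With v_c(0) = 0:", sp.simplify(particular.rhs))   # Vs*(1 - exp(-t/(R*C)))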
Abstract:
A new technological area is under increasing development. This area, called the Internet of Things, arises from the need to interconnect various objects in order to improve services or meet users' needs. This dissertation focuses on a specific area of Internet of Things technology: sensing. The sensing network in question is deployed by the European project Future Cities [1], which creates an infrastructure for the research and validation of smart projects and services in the city of Porto. The work carried out in this dissertation concerns one of the platforms of that sensing network: the environmental sensing platform called UrbanSense. The environmental sensors, incorporated into Data Collection Units (DCUs), also referred to as nodes, measure environmental variables such as temperature, humidity, ozone and carbon monoxide. The nodes, however, have limited resources in terms of energy, processing and memory. Despite major advances in storage and processing, energy technology, namely batteries, has not evolved as markedly, limiting node operability [2]. This thesis focuses essentially on improving the energy performance of the UrbanSense sensor network. The main contribution is an adaptation of the ad hoc routing protocol OLSR (Optimized Link State Routing Protocol) for use by nodes powered by renewable energy, so as to increase the useful lifetime of the sensing network's nodes. With this contribution it is possible to collect more data over longer periods, approximately 10 hours compared with the previous 7 hours, and to transmit it at a higher rate, around 500 KB/s, together with an analytical characterisation of the various parameters of the sensing network. However, extending the lifetime of sensor nodes with renewable energy, namely solar energy, increases their weight and size, which limits their mobility and therefore requires prior planning of their location. In a first stage of the work, the consumption of the DCU was analysed, since the DCUs are the basis of the infrastructure and communicate with each other over WiFi or 3G. After an analysis of routing protocols with support for energy-related parameters, OLSR was chosen because of its maturity and compatibility with the current DCU system; although other protocols exist, their implementations are not available as open-source software. To validate the work carried out in this dissertation, a preliminary trial was performed without renewable energy in order to characterise the system's limitations. This trial made it possible to verify the compatibility of the various components and to adjust strategies. In a second validation test, a real trial of the system was carried out with 4 communicating nodes using the energy-efficient protocol. The protocol is evaluated in terms of the increase in node lifetime and in transfer rate. The analysis and adaptation of the ad hoc routing protocol provides greater longevity in terms of useful lifetime compared with what exists during the data-transmission process.
Although longevity is lower when the energy parameter is left at its default value of 3, adapting the system to the available energy provides a higher transfer rate over a longer period. This is a favourable factor for opening up new services for sending data in real time or for sending larger files.
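OLSR (RFC 3626) already exposes a per-node "willingness" value (0 = WILL_NEVER up to 7 = WILL_ALWAYS, default 3) that controls how readily a node is selected as a relay, so an energy-aware adaptation of the kind described above can be sketched by mapping the node's battery and solar state onto that value. The Python below is only an illustrative sketch under assumed thresholds; it is not the modification implemented in the dissertation.

    # Illustrative mapping from a node's energy state to an OLSR willingness value.
    # The thresholds and the solar bonus are assumptions for the example only.

    WILL_NEVER, WILL_LOW, WILL_DEFAULT, WILL_HIGH, WILL_ALWAYS = 0, 1, 3, 6, 7

    def willingness(battery_pct: float, charging_from_solar: bool) -> int:
        """Pick an OLSR willingness so that energy-poor nodes relay less traffic."""
        if battery_pct < 10:
            return WILL_NEVER          # preserve the node: never act as a relay (MPR)
        if battery_pct < 30:
            level = WILL_LOW
        elif battery_pct < 70:
            level = WILL_DEFAULT       # RFC 3626 default willingness is 3
        else:
            level = WILL_HIGH
        # A node currently harvesting solar energy can afford to relay more traffic.
        return min(level + 1, WILL_ALWAYS) if charging_from_solar else level

    # Example: a node at 65% battery while charging advertises willingness 4.
    print(willingness(65, charging_from_solar=True))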
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Stratigraphic Columns (SC) are the most useful and common way to represent field descriptions (e.g., grain size, thickness of rock packages, and fossil and lithological components) of rock sequences and well logs. In these representations the width of the SC varies with grain size (i.e., the wider the strata, the coarser the rocks (Miall 1990; Tucker 2011)), and the thickness of each layer is represented on the vertical axis of the diagram. Typically these representations are drawn 'manually' using vector graphics editors (e.g., Adobe Illustrator®, CorelDRAW®, Inkscape). Nowadays various software packages plot SCs automatically, but versatile open-source tools are lacking and it is very difficult to both store and analyse stratigraphic information. This document presents Stratigraphic Data Analysis in R (SDAR), an analytical package designed both to plot and to facilitate the analysis of stratigraphic data in R (R Core Team 2014). SDAR uses simple stratigraphic data and takes advantage of the flexible plotting tools available in R to produce detailed SCs. The main benefits of SDAR are that it: (i) can generate accurate and complete SC plots including multiple features (e.g., sedimentary structures, samples, fossil content, colour, structural data, contacts between beds); (ii) is developed in a free software environment for statistical computing and graphics; (iii) runs on a wide variety of platforms (i.e., UNIX, Windows, and macOS); and (iv) allows both plotting and analysis functions to be executed directly from R's command-line interface (CLI), which enables users to integrate SDAR's functions with the many other add-on packages available for R from the Comprehensive R Archive Network (CRAN).
Abstract:
The usual high cost of commercial codes, together with some technical limitations, clearly limits the use of numerical modelling tools in both industry and academia. Consequently, the number of companies that use numerical codes is limited, and a lot of effort is put into the development and maintenance of in-house, academia-based codes. Having in mind the potential of numerical modelling tools as a design aid for both products and processes, different research teams have been contributing to the development of open-source codes/libraries. In this framework, any individual can take advantage of the available code capabilities and/or implement additional features based on their specific needs. These types of codes are usually developed by large communities, which provide improvements and new features in their specific fields of research, thus speeding up the code development process significantly. Among others, the OpenFOAM® multi-physics computational library, developed by a very large and dynamic community, nowadays comprises several features usually only available in its commercial counterparts, e.g. dynamic meshes, a large diversity of complex physical models, parallelization and multiphase models, to name just a few. This computational library is developed in C++ and makes use of most of the language's capabilities to facilitate the implementation of new functionalities. In the field of computational rheology, OpenFOAM® solvers were recently developed to deal with the most relevant differential viscoelastic rheological models, and stabilization techniques are currently being verified. This work describes the implementation of a new solver in the OpenFOAM® library able to cope with integral viscoelastic models based on the deformation field method. The implemented solver is verified by comparing the predicted results with analytical solutions and with results published in the literature, and by using the Method of Manufactured Solutions.
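For context, integral viscoelastic models of the kind handled by the deformation field method express the current stress as a weighted integral over the strain history; a common example is the K-BKZ family, which in a widely used form reads (sketched here in standard notation, not necessarily the exact formulation implemented in the solver):

    \boldsymbol{\tau}(t) = \int_{-\infty}^{t} M(t - t')\, h\!\left(I_1, I_2\right)\, \mathbf{B}(t, t')\, \mathrm{d}t',
    \qquad
    M(s) = \sum_{k=1}^{N} \frac{\eta_k}{\lambda_k^{2}}\, e^{-s/\lambda_k},

where \mathbf{B}(t, t') is the Finger strain tensor between a past time t' and the current time t, M is the memory function built from the relaxation spectrum (\eta_k, \lambda_k), and h(I_1, I_2) is a damping function of the strain invariants. The deformation field method evaluates such integrals by tracking a discrete set of strain fields associated with fixed reference times, instead of following individual fluid particles.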