Abstract:
Objective: To summarize all relevant findings in the published literature regarding the potential dose reduction, in relation to image quality, of Sinogram-Affirmed Iterative Reconstruction (SAFIRE) compared to Filtered Back Projection (FBP). Background: Computed Tomography (CT) is one of the most used radiographic modalities in clinical practice, providing high spatial and contrast resolution. However, it also delivers a relatively high radiation dose to the patient. Reconstructing raw data using Iterative Reconstruction (IR) algorithms has the potential to iteratively reduce image noise while maintaining or improving the image quality of low-dose standard FBP reconstructions. Nevertheless, long reconstruction times made IR impractical for clinical use until recently. Siemens Medical developed a new IR algorithm called SAFIRE, which offers up to five different strength levels and poses an alternative to conventional IR with a significantly reduced reconstruction time. Methods: The MEDLINE, ScienceDirect and CINAHL databases were used to gather literature. Eleven articles were included in this review (from 2012 to July 2014). Discussion: This narrative review summarizes the results of eleven articles (covering studies on both patients and phantoms) and describes SAFIRE's strength at reducing noise in low-dose acquisitions while providing acceptable image quality. Conclusion: Even though the results differ slightly, the literature gathered for this review suggests that the dose in current CT protocols can be reduced by at least 50% while maintaining or improving image quality. There is, however, a lack of literature concerning the paediatric population (with its increased radiation sensitivity). Further studies should also assess the impact of SAFIRE on diagnostic accuracy.
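As a back-of-the-envelope illustration of why iterative reconstruction is needed at reduced dose: quantum noise in CT scales roughly with the inverse square root of dose, so halving the dose raises FBP noise by about 41%. The sketch below is not taken from the reviewed studies; the 1/sqrt(dose) relation is standard CT physics, while the per-strength suppression factors for SAFIRE levels 1-5 are illustrative placeholders.

```python
import math

def relative_noise(dose_fraction: float) -> float:
    """Quantum noise in CT scales roughly with 1/sqrt(dose)."""
    return 1.0 / math.sqrt(dose_fraction)

# At 50% of the reference dose, FBP noise rises by ~41%.
fbp_noise = relative_noise(0.5)
print(f"FBP noise at half dose: {fbp_noise:.2f}x reference")

# Hypothetical noise-suppression factors for SAFIRE strengths 1-5
# (illustrative only; actual factors are scanner- and protocol-dependent
# and vary across the reviewed studies).
suppression = {1: 0.90, 2: 0.80, 3: 0.70, 4: 0.60, 5: 0.50}
for strength, factor in suppression.items():
    print(f"SAFIRE S{strength}: {fbp_noise * factor:.2f}x reference noise")
```

Any strength whose suppression factor falls at or below 1/sqrt(2), about 0.71, would bring half-dose noise back to the full-dose FBP baseline, which is consistent with the 50% dose-reduction figure above.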
Abstract:
OBJECTIVE To describe different approaches to promote adverse drug reaction reporting among health care professionals, determining their cost-effectiveness. METHODS We analyzed and compared several approaches taken by the Northern Pharmacovigilance Centre (Portugal) to promote adverse drug reaction reporting. Approaches were compared regarding the number and relevance of the adverse drug reaction reports obtained and the costs involved. Costs per report were estimated by adding the initial costs and the running costs of each intervention and dividing the total by the number of reports obtained with that intervention, to assess its cost-effectiveness. RESULTS All the approaches seem to have increased the number of adverse drug reaction reports. We noted the biggest increase with protocols (321 reports, costing 1.96 € each), followed by the first educational approach (265 reports, 20.31 €/report) and by the hyperlink approach (136 reports, 15.59 €/report). Regarding the severity of adverse drug reactions, protocols were the most efficient approach, costing 2.29 €/report, followed by hyperlinks (30.28 €/report, with no running costs). Concerning unexpected adverse drug reactions, the best result was obtained with protocols (5.12 €/report), followed by the first educational approach (38.79 €/report). CONCLUSIONS We recommend implementing protocols in other pharmacovigilance centers. They seem to be the most efficient intervention, allowing adverse drug reaction reports to be received at lower cost. The increase applied not only to the total number of reports, but also to the severity, unexpectedness and high degree of causality attributed to the adverse drug reactions. Still, hyperlinks have the advantage of involving no running costs, showing the second best performance in cost per adverse drug reaction report.
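The cost-effectiveness metric described above is plain arithmetic: cost per report = (initial costs + running costs) / number of reports. A minimal sketch reproducing the reported protocol figure; the initial/running split below is assumed, with only the total (about 629 €) back-calculated from the abstract's 321 reports at 1.96 €/report.

```python
def cost_per_report(initial_cost: float, running_cost: float, n_reports: int) -> float:
    """Cost-effectiveness as defined in the study: total cost over report count."""
    return (initial_cost + running_cost) / n_reports

# Illustrative split: 321 protocol reports at 1.96 EUR/report implies
# roughly 629 EUR in total; how it divides into initial vs. running
# costs is an assumption here.
print(f"{cost_per_report(400.0, 229.16, 321):.2f} EUR/report")  # 1.96
```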
Abstract:
Master's in Electrical and Computer Engineering - Specialization Area in Telecommunications
Abstract:
The aim of this thesis is to demonstrate the suitability of the agent-based electronic market paradigm for trading multimedia objects according to viewer profiles. This dissertation describes the project carried out within the scope of a content personalization platform under construction. The adopted application domain was the personalization of the advertising breaks broadcast by multimedia content distributors, i.e., the goal is to generate, in useful time, the line-up of advertisements that best fits the profile of a viewer or group of viewers. The project focused on the study and selection of supporting technologies, the design of the architecture and the development of a prototype allowing several experiments, namely with different strategies and market types. The proposed platform architecture consists of a multi-agent system organized in three layers, exposing Web service interfaces to the outside. The top layer consists of agents that interface with the outside. The middle layer holds the autonomous agents that model the entities producing and consuming multimedia components, as well as the market-regulating entity. These agents register in a dedicated registry service where they specify the multimedia components they intend to negotiate. The bottom layer hosts the market itself, made up of agents acting as delegates of the upper-layer agents. The market is launched through an interface by choosing the market type and the type of items to negotiate. This project focused on building the market layer and the part of the middle layer that supports negotiation activities in the market. Negotiation concerns the price of transmitting an advertisement in the break being filled. Different negotiation profiles were implemented, with distinct tactics, increments and price variation limits. As for negotiation protocols, a variant of the Iterated Contract Net was adopted: the Fixed Iterated Contract Net. The resulting prototype was successfully tested and debugged.
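Since the prototype adopts a Fixed Iterated Contract Net, a minimal sketch of the idea may help: a fixed number of call-for-proposal rounds in which seller delegates concede on price by a simple tactic. Class and function names, tactics and figures are illustrative, not the prototype's actual design.

```python
from dataclasses import dataclass

@dataclass
class Bidder:
    """Seller delegate with a simple concession tactic on price."""
    name: str
    limit: float   # lowest acceptable price for the ad slot
    price: float   # current ask
    step: float    # concession per round

    def propose(self) -> float:
        return self.price

    def concede(self) -> None:
        self.price = max(self.limit, self.price - self.step)

def fixed_iterated_contract_net(bidders, budget: float, rounds: int = 3):
    """Run a fixed number of call-for-proposal rounds; award the slot to
    the cheapest proposal within budget, or fail after the last round."""
    for _ in range(rounds):
        proposals = {b.name: b.propose() for b in bidders}
        best = min(proposals, key=proposals.get)
        if proposals[best] <= budget:
            return best, proposals[best]
        for b in bidders:
            b.concede()
    return None, None

bidders = [Bidder("ad_A", 50.0, 90.0, 15.0), Bidder("ad_B", 60.0, 80.0, 5.0)]
print(fixed_iterated_contract_net(bidders, budget=65.0))  # ('ad_A', 60.0)
```

The fixed round count is what distinguishes this variant from the open-ended Iterated Contract Net: the manager always knows negotiation will terminate within a bounded time, which matters when the ad line-up must be generated in useful time.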
Abstract:
Master's in Electrical and Computer Engineering - Specialization Area in Telecommunications
Abstract:
The cornerstone of the interoperability of eLearning systems is the standard definition of learning objects. Nevertheless, for some domains this standard is insufficient to fully describe all the assets, especially when they are used as input for other eLearning services. On the other hand, a standard definition of learning objects is not enough to ensure interoperability among eLearning systems; they must also use a standard API to exchange learning objects. This paper presents the design and implementation of a service-oriented repository of learning objects called crimsonHex. This repository is fully compliant with the existing interoperability standards and supports new definitions of learning objects for specialized domains. We illustrate this feature with the definition of programming problems as learning objects and their validation by the repository. The repository is also prepared to store usage data on learning objects, to tailor the presentation order and adapt it to learner profiles.
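To make the idea of specialized learning-object definitions concrete, here is a hedged sketch of building and validating a minimal manifest for a programming problem. All element names and the validation rule are hypothetical; the actual IMS/crimsonHex schema is defined in the paper, not here.

```python
import xml.etree.ElementTree as ET

def build_manifest(lo_id: str, title: str, statement: str, tests: list) -> ET.Element:
    """Assemble a minimal manifest for a programming-problem learning object.
    Element names are illustrative, not the actual IMS/crimsonHex schema."""
    manifest = ET.Element("manifest", identifier=lo_id)
    meta = ET.SubElement(manifest, "metadata")
    ET.SubElement(meta, "title").text = title
    problem = ET.SubElement(manifest, "problem")
    ET.SubElement(problem, "statement").text = statement
    for stdin, stdout in tests:
        case = ET.SubElement(problem, "test")
        ET.SubElement(case, "input").text = stdin
        ET.SubElement(case, "output").text = stdout
    return manifest

def validate(manifest: ET.Element) -> bool:
    """Hypothetical repository-side check: a programming problem needs a
    statement and at least one test case before it is accepted."""
    return (manifest.find("problem/statement") is not None
            and len(manifest.findall("problem/test")) > 0)

lo = build_manifest("lo-001", "Sum two numbers",
                    "Read two integers; print their sum.", [("1 2", "3")])
print(validate(lo))  # True
```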
Abstract:
As the variety of mobile devices connected to the Internet grows, there is a corresponding increase in the need to deliver content tailored to their heterogeneous characteristics. At the same time, we are witnessing the growth of e-learning in universities through the adoption of electronic platforms and standards. Not surprisingly, the concept of mLearning (Mobile Learning) appeared in recent years, reducing the constraint of learning location through the mobility of general-purpose portable devices. However, this large number and variety of Web-enabled devices poses several challenges for Web content creators who want to automatically obtain the delivery context and adapt the content to the client mobile devices. In this paper we analyze several approaches to defining the delivery context and present an architecture for delivering uniform mLearning content to mobile devices, called eduMCA (Educational Mobile Content Adaptation). With the eduMCA system, Web authors will not need to create specialized pages for each kind of device, since content is automatically transformed to match the capabilities of any mobile device, from WAP to XHTML MP-compliant devices.
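Obtaining the delivery context typically starts from HTTP request headers such as Accept and User-Agent. A minimal sketch of header-based classification follows; the mapping rules are illustrative and much cruder than a real delivery-context framework (or eduMCA itself) would use.

```python
def classify_delivery_context(headers: dict) -> str:
    """Crude delivery-context detection from HTTP request headers.
    Returns the markup family to serve; rules are illustrative only."""
    accept = headers.get("Accept", "")
    user_agent = headers.get("User-Agent", "").lower()
    if "text/vnd.wap.wml" in accept:
        return "WAP/WML"
    if "application/vnd.wap.xhtml+xml" in accept or "midp" in user_agent:
        return "XHTML MP"
    return "XHTML/HTML"

request = {"Accept": "application/vnd.wap.xhtml+xml,text/html",
           "User-Agent": "Nokia6230i/2.0 Profile/MIDP-2.0"}
print(classify_delivery_context(request))  # XHTML MP
```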
Abstract:
Dissertation submitted for the Master's Degree in Informatics, Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
Background - Medical image perception research relies on visual data to study the diagnostic relationship between observers and medical images. A consistent method to assess visual function for participants in medical imaging research has not been developed, which represents a significant gap in existing research. Methods - Three visual assessment factors appropriate to observer studies were identified: visual acuity, contrast sensitivity, and stereopsis. A test was designed for each, and 30 radiography observers (mean age 31.6 years) participated in each test. Results - Mean binocular visual acuity for distance was 20/14 for all observers. The difference between observers who did and did not use corrective lenses was not statistically significant (P = .12). All subjects had normal values for near visual acuity and stereoacuity. Contrast sensitivity was better than population norms. Conclusion - All observers had normal visual function and could participate in medical imaging visual analysis studies. Evaluation protocols and population norms are provided. Further studies are necessary to fully understand the relationship between visual performance on tests and diagnostic accuracy in practice.
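For reference, a Snellen fraction such as the 20/14 mean reported above converts to logMAR by the standard formula logMAR = log10(denominator/numerator), so 20/14 corresponds to roughly -0.15, better than the 20/20 baseline. A one-line check (the formula is standard optometry, not specific to this study):

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """Convert a Snellen fraction (e.g. 20/14) to logMAR.
    logMAR = log10(denominator / numerator); 20/20 maps to 0.0."""
    return math.log10(denominator / numerator)

print(f"{snellen_to_logmar(20, 14):+.2f}")  # about -0.15, better than 20/20
print(f"{snellen_to_logmar(20, 20):+.2f}")  # +0.00 baseline
```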
Abstract:
The latest medical diagnosis devices enable e-diagnosis, making access to these services easier, faster and available in remote areas. However, this imposes new communication and data interchange challenges. In this paper a new XML-based format for storing cardiac signals and related information is presented. The proposed structure encompasses data acquisition devices, patient information, data description, pathological diagnosis and waveform annotation. Compared with formats of similar purpose, several advantages arise. Besides the fully integrated data model, noteworthy features include geographical references for e-diagnosis, multi-stream data description, the ability to handle several simultaneous devices, independent waveform annotation and an HL7-compliant structure for common contents. These features provide enhanced integration with existing systems and improved flexibility for cardiac data representation.
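Since the abstract lists the format's sections without reproducing the schema, the sketch below assembles a record with hypothetical element names covering those sections (device, patient with a geographical reference, multi-stream description, diagnosis, waveform annotation). Treat every tag name as an assumption.

```python
import xml.etree.ElementTree as ET

# Sketch of the kind of sections the abstract describes; all element
# names are hypothetical, since the actual schema is not given here.
record = ET.Element("cardiacRecord")
ET.SubElement(record, "device", model="ECG-12", channels="12")
patient = ET.SubElement(record, "patient", id="P-0042")
ET.SubElement(patient, "location", lat="41.15", lon="-8.61")  # geo reference for e-diagnosis
streams = ET.SubElement(record, "streams")
ET.SubElement(streams, "stream", lead="II", samplingRate="500")  # one of several streams
ET.SubElement(record, "diagnosis").text = "sinus rhythm"
annotations = ET.SubElement(record, "annotations")
ET.SubElement(annotations, "annotation", stream="II", sample="1024").text = "R peak"

print(ET.tostring(record, encoding="unicode"))
```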
Abstract:
Current Learning Management Systems focus on the management of students, keeping track of their progress across all types of training activities. This type of system lacks integration with other e-Learning systems. For instance, learning objects stored in a centralized repository are unavailable throughout an organization for potential reuse. In this paper we present the interoperability features of crimsonHex, a service-oriented repository of learning objects, highlighting the use of XML languages. Its interoperability features are compliant with the existing standards, and we propose extensions to the IMS interoperability recommendation, adding new functions, formalizing an XML message interchange and also providing a REST interface. To validate the proposed extensions and their implementation in crimsonHex we designed two repository plugins for Moodle 2.0, the first of which is already implemented and is expected to be included in the next release of this popular learning management system.
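As a hedged illustration of what a REST interface to such a repository could look like, the sketch below defines fetch and store operations. The base URL and path scheme are hypothetical; the IMS recommendation defines the functions, not these exact URLs.

```python
import urllib.request

BASE = "http://repository.example.org/crimsonhex"  # hypothetical base URL

def get_learning_object(lo_id: str) -> bytes:
    """Fetch a learning object over a REST-style interface.
    The path scheme is an assumption, not crimsonHex's documented one."""
    with urllib.request.urlopen(f"{BASE}/los/{lo_id}") as resp:
        return resp.read()

def store_learning_object(lo_id: str, manifest_xml: bytes) -> int:
    """Create or replace a learning object with an XML manifest payload."""
    req = urllib.request.Request(f"{BASE}/los/{lo_id}", data=manifest_xml,
                                 method="PUT",
                                 headers={"Content-Type": "application/xml"})
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Usage (would require a live endpoint):
# status = store_learning_object("lo-001", b"<manifest .../>")
```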
Abstract:
In recent years, mobile learning has emerged as an educational approach that reduces the constraint of learning location and adapts the teaching-learning process to all types of students. However, the large number and variety of Web-enabled devices poses challenges for Web content creators who want to automatically obtain the delivery context and adapt the content to mobile devices. In this paper we study several approaches to adapting learning content to mobile phones. We present an architecture for delivering uniform m-Learning content to students at a higher education school. The system development is organized in two phases: first enabling the delivery of educational content to mobile devices, then adapting it to the full range of heterogeneous mobile platforms. With this approach, Web authors will not need to create specialized pages for each kind of device, since content is automatically transformed to match the capabilities of any mobile device, from WAP to XHTML MP-compliant devices.
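For the second phase, adapting content to constrained platforms, one simple tactic is stripping markup that XHTML MP class devices handle poorly. A minimal sketch using the standard-library HTML parser; the tag list and the keep-text-only behavior are illustrative, not the system's actual transformation pipeline.

```python
from html.parser import HTMLParser

class MobileSimplifier(HTMLParser):
    """Second-phase adaptation sketch: drop markup that constrained
    devices handle poorly (scripts, frames, embedded objects) and keep
    only the textual lesson content. The tag list is illustrative."""
    DROP = {"script", "object", "iframe", "embed", "style"}

    def __init__(self):
        super().__init__()
        self.out, self._skip = [], 0

    def handle_starttag(self, tag, attrs):
        if tag in self.DROP:
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in self.DROP and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.out.append(data)

parser = MobileSimplifier()
parser.feed("<p>Lesson 1</p><script>heavy()</script><p>Ohm's law: V = IR</p>")
print("".join(parser.out))  # Lesson 1Ohm's law: V = IR
```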
Abstract:
Dissertation submitted for the Master's Degree in Biotechnology at the Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
Work presented within the scope of the Master's in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering
Abstract:
Mathematical models and statistical analysis are key instruments in soil science research, as they can describe and/or predict the current state of a soil system. These tools allow us to explore the behavior of soil-related processes and properties, as well as to generate new hypotheses for future experimentation. A good model and analysis of variations in soil properties, allowing us to draw sound conclusions and to estimate spatially correlated variables at unsampled locations, clearly depends on the amount and quality of the data and on the robustness of the techniques and estimators. The quality of the data, in turn, depends on a competent data collection procedure and on capable laboratory analytical work. Following the available standard soil sampling protocols, soil samples should be collected according to key factors such as a convenient spatial scale, landscape homogeneity (or non-homogeneity), land color, soil texture, land slope and solar exposure. Obtaining good-quality data from forest soils is predictably expensive, as it is labor intensive and demands substantial manpower and equipment, both in field work and in laboratory analysis. Moreover, the sampling scheme to be used in a forest-field data collection procedure is not simple to design, as the chosen sampling strategies depend strongly on soil taxonomy. In fact, a sampling grid cannot be followed if rocks are found at the intended collection depth, if no soil is found at all, or if large trees block the collection. Consequently, a proficient design of a soil sampling campaign in a forest field is not always a simple process and sometimes represents a truly huge challenge. In this work, we present some of the difficulties that occurred during two experiments on forest soil conducted to study the spatial variation of some physical-chemical soil properties. Two different sampling protocols were considered for monitoring two types of forest soil located in NW Portugal: umbric regosol and lithosol. Two different sampling tools were also used: a manual auger and a shovel. Both scenarios were analyzed, and the results allow us to conclude that monitoring forest soil for mathematical and statistical investigation requires a data collection procedure compatible with the established protocols, but a pre-defined grid often fails when the variability of the soil property is not spatially uniform. In that case, the sampling grid should be conveniently adapted from one part of the landscape to another, and this fact should be taken into account in the mathematical procedure.
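The abstract mentions estimating spatially correlated variables at unsampled locations. One standard textbook choice for such estimation is inverse distance weighting, sketched below; the study does not prescribe this particular method, and all coordinates and values are illustrative.

```python
import math

def idw_estimate(samples, target, power: float = 2.0) -> float:
    """Inverse distance weighting: one standard way to estimate a
    spatially correlated soil property at an unsampled location.
    `samples` is a list of ((x, y), value) pairs; `target` is (x, y).
    (Illustrative method choice; the study does not prescribe IDW.)"""
    num = den = 0.0
    for (x, y), value in samples:
        d = math.hypot(x - target[0], y - target[1])
        if d == 0.0:
            return value  # exact hit on a sampled point
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# pH measurements on an irregular grid (values illustrative), with the
# grid adapted around obstacles as the abstract suggests.
samples = [((0, 0), 4.8), ((30, 0), 5.1), ((0, 30), 4.6), ((25, 28), 5.0)]
print(f"estimated pH at (15, 15): {idw_estimate(samples, (15, 15)):.2f}")
```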