24 results for Software Package Data Exchange (SPDX)
at Instituto Politécnico do Porto, Portugal
Abstract:
Introduction: Myocardial Perfusion Imaging (MPI) is a very important tool in the assessment of Coronary Artery Disease (CAD) patients, and worldwide data demonstrate increasingly wide use and clinical acceptance. Nevertheless, it is a complex process and quite vulnerable to the amount and type of possible artefacts, some of which seriously affect the overall quality and clinical utility of the obtained data. One of the most inconvenient, but relatively frequent (20% of cases), artefacts is related to patient motion during image acquisition. In most of those situations, the specific data is evaluated and a decision is made between (A) accepting the results as they are, considering that the "noise" so introduced does not affect the final clinical information too seriously, or (B) repeating the acquisition process. Another possibility is to use the motion correction software provided within the software package included in any current gamma camera. The aim of this study is to compare the quality of the final images obtained after the application of motion correction software with that obtained after repetition of the image acquisition. Material and Methods: Thirty cases of MPI affected by motion artefacts and subsequently repeated were used. A group of three independent expert Nuclear Medicine clinicians (blinded to the differences of origin) was invited to evaluate the 30 sets of three images, one set per patient: (A) the original image, motion uncorrected; (B) the original image, motion corrected; and (C) the second acquisition image, without motion. The results were statistically analysed. Results and Conclusion: The results demonstrate that motion correction software is useful essentially when the amplitude of movement is not too large (a precise quantification proved hard to define, due to discrepancies between clinicians and other factors, namely between brands); when the amplitude of movement is too large, the percentage of agreement between clinicians is much higher and repetition of the examination is unanimously considered indispensable.
Abstract:
Multi-agent architectures are well suited for complex, inherently distributed problem-solving domains. From the many challenging aspects that arise within this framework, a crucial one emerges: how to incorporate dynamic and conflicting agent beliefs? While the belief revision activity in a single-agent scenario is concentrated on incorporating new information while preserving consistency, in a multi-agent system it also has to deal with possible conflicts between the agents' perspectives. To provide an adequate framework, each agent, built as a combination of an assumption-based belief revision system and a cooperation layer, was enriched with additional features: a distributed search control mechanism allowing dynamic context management, and a set of different distributed consistency methodologies. As a result, a Distributed Belief Revision Testbed (DiBeRT) was developed. This paper is a preliminary report presenting some of DiBeRT's contributions: a concise representation of external beliefs; a simple and innovative methodology to achieve distributed context management; and a reduced inter-agent data exchange format.
Abstract:
Belief revision is a critical issue in real-world DAI applications. A Multi-Agent System not only has to cope with the intrinsic incompleteness and the constant change of the available knowledge (as in the case of its stand-alone counterparts), but also has to deal with possible conflicts between the agents' perspectives. Each semi-autonomous agent, designed as a combination of a problem solver and an assumption-based truth maintenance system (ATMS), was enriched with improved capabilities: a distributed context management facility allowing the user to dynamically focus on the more pertinent contexts, and a distributed belief revision algorithm with two levels of consistency. The contributions of this work include: (i) a concise representation of the shared external facts; (ii) a simple and innovative methodology to achieve distributed context management; and (iii) a reduced inter-agent data exchange format. The different levels of consistency adopted were based on the relevance of the data under consideration: higher-relevance data (detected inconsistencies) was granted global consistency, while less relevant data (system facts) was assigned local consistency. These abilities are fully supported by the ATMS standard functionalities.
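As a purely illustrative aside, the core book-keeping unit of an assumption-based (ATMS-style) system such as the one these two abstracts describe can be sketched as a belief carrying a label of minimal assumption sets; the class and field names below are invented for illustration, not DiBeRT's actual data structures.

```java
import java.util.Set;

/** Illustrative sketch of ATMS-style book-keeping for assumption-based belief
 *  revision: a belief holds in any context that includes one of its supporting
 *  assumption sets (its label). Names are invented, not DiBeRT's actual types. */
public class Belief {
    final String proposition;
    final Set<Set<String>> label;   // minimal assumption sets that justify it

    Belief(String proposition, Set<Set<String>> label) {
        this.proposition = proposition;
        this.label = label;
    }

    /** A belief is in a context (a chosen set of assumptions) if the context
     *  contains at least one of its supporting assumption sets. */
    boolean holdsIn(Set<String> context) {
        return label.stream().anyMatch(context::containsAll);
    }

    public static void main(String[] args) {
        Belief b = new Belief("valve_open",
                Set.of(Set.of("sensorA_ok"), Set.of("sensorB_ok", "pump_on")));
        System.out.println(b.holdsIn(Set.of("sensorA_ok", "pump_on"))); // true
        System.out.println(b.holdsIn(Set.of("pump_on")));               // false
    }
}
```

Switching contexts then amounts to choosing a different assumption set, which is what makes dynamic context management cheap in this representation.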
Abstract:
Objectives: The purpose of this article is to identify differences between surveys using paper and online questionnaires. The author has extensive experience with questions concerning the development of survey-based research, e.g. the limits of postal and online questionnaires. Methods: Paper and online questionnaires were used in the physician studies carried out in 1995 (doctors graduated in 1982-1991), 2000 (doctors graduated in 1982-1996), 2005 (doctors graduated in 1982-2001) and 2011 (doctors graduated in 1977-2006), and in a study of 457 family doctors in 2000. The response rates were 64%, 68%, 64%, 49% and 73%, respectively. Results: The physician studies showed that there were differences between the methods, connected with the use of paper-based versus online questionnaires and with the response rate. The online survey gave a lower response rate than the postal survey. The major advantages of the online survey were the short response time; the very low financial resource needs; and the fact that the data were loaded directly into the data analysis software, saving the time and resources associated with the data entry process. Conclusions: The current article helps researchers plan the study design and choose the right data collection method.
Abstract:
This paper presents the SmartClean tool. The purpose of this tool is to detect and correct data quality problems (DQPs). Compared with existing tools, SmartClean has the following main advantage: the user does not need to specify the execution sequence of the data cleaning operations. To that end, an execution sequence was developed, and the problems are manipulated (i.e., detected and corrected) following that sequence. The sequence also supports the incremental execution of the operations. In this paper, the underlying architecture of the tool is presented and its components are described in detail. The validity of the tool, and consequently of the architecture, is demonstrated through the presentation of a case study. Although SmartClean has cleaning capabilities at all the other levels, only those related to the attribute value level are described in this paper.
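As a hypothetical illustration of the idea of a fixed execution sequence, the sketch below runs attribute-value-level detect-and-correct operations in a predetermined order, so the user never specifies it; the record type and operations are invented, not SmartClean's actual components.

```java
import java.util.List;
import java.util.function.UnaryOperator;

/** Hypothetical sketch of a fixed execution sequence of cleaning operations,
 *  applied in a predetermined order so the user never specifies it. The record
 *  type and operation names are invented, not SmartClean's actual components. */
public class CleaningPipeline {

    record Row(String name, String email, String age) {}

    // Attribute-value-level operations, ordered so later steps see normalized input.
    static final List<UnaryOperator<Row>> SEQUENCE = List.of(
            r -> new Row(r.name().trim(), r.email().trim().toLowerCase(), r.age().trim()),
            r -> new Row(r.name(), r.email().contains("@") ? r.email() : "", r.age()),
            r -> new Row(r.name(), r.email(), r.age().matches("\\d+") ? r.age() : ""));

    static Row clean(Row row) {
        for (UnaryOperator<Row> op : SEQUENCE) {
            row = op.apply(row);   // each operation detects and corrects one problem class
        }
        return row;
    }

    public static void main(String[] args) {
        System.out.println(clean(new Row("  Ana ", " ANA@MAIL.PT ", "3o")));
    }
}
```

Ordering normalization before validation is what makes incremental execution safe: re-running the sequence on partially cleaned data yields the same result.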
Abstract:
The growth of the world population, particularly in emerging countries such as China and India, has proved to be an additional problem for the difficulties associated with world energy consumption, since this situation unequivocally limits the access of these millions of people to the electricity needed for basic survival. One of the many ways of addressing this need is being developed through the use of renewable resources as energy sources. Wherever in the world we may be, these energy sources are abundant, inexhaustible and free. The problem lies in how these renewable resources are managed according to the load demands of the installations. Hybrid systems can be used to produce energy anywhere in the world. Historically, this type of system was applied in isolated locations, but nowadays they can be connected directly to the grid, allowing energy to be sold. It was in this context that this thesis was developed, with the goal of providing a software tool capable of calculating the profitability of a hybrid system, whether grid-connected or stand-alone. However, the complexity of this problem is very high, since there is an extensive range of characteristics and distinct equipment that can be adopted. The developed application therefore had to be limited and restricted to the available data, so that it could be generic while still having practical applicability. The goal of the developed tool is to immediately present the implementation costs that a hybrid system may entail, depending on only three distinct variables. The first variable is the installation site of the system. The second is the type of connection (stand-alone or grid-connected) and, finally, the cost of the equipment (wind, solar and remaining components) to be introduced. After these data are entered, the application presents estimated values for Payback and Net Present Value (NPV).
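For the reader unfamiliar with these two indicators, here is a minimal sketch of the kind of calculation such a tool performs, assuming an initial investment followed by constant yearly cash flows; the figures and the discount rate are illustrative assumptions, not values from the thesis.

```java
/** Minimal sketch of the Payback and NPV estimates such a tool might produce.
 *  All figures and the discount rate are illustrative assumptions. */
public class HybridSystemEconomics {

    /** Net Present Value of an initial investment followed by yearly cash flows. */
    static double npv(double investment, double[] yearlyCashFlows, double discountRate) {
        double npv = -investment;
        for (int year = 1; year <= yearlyCashFlows.length; year++) {
            npv += yearlyCashFlows[year - 1] / Math.pow(1.0 + discountRate, year);
        }
        return npv;
    }

    /** Simple (undiscounted) payback period in years; -1 if never recovered. */
    static double payback(double investment, double[] yearlyCashFlows) {
        double cumulative = 0.0;
        for (int year = 0; year < yearlyCashFlows.length; year++) {
            double previous = cumulative;
            cumulative += yearlyCashFlows[year];
            if (cumulative >= investment) {
                // Interpolate within the year in which the investment is recovered.
                return year + (investment - previous) / yearlyCashFlows[year];
            }
        }
        return -1.0;
    }

    public static void main(String[] args) {
        double investment = 50_000.0;                  // wind + solar + other components
        double[] cashFlows = new double[20];           // 20-year project horizon
        java.util.Arrays.fill(cashFlows, 6_000.0);     // yearly energy savings/sales
        System.out.printf("NPV: %.2f%n", npv(investment, cashFlows, 0.05));
        System.out.printf("Payback: %.1f years%n", payback(investment, cashFlows));
    }
}
```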
Abstract:
The main purpose of this thesis is to create a data interface between a source of tourist route information and an interactive mobile system for navigating and visualising that data. The technological format will be portable and mobility-oriented (PDA) and should be practical, intuitive and multi-faceted, offering good usability to audiences of various age groups. There will be an AI (Artificial Intelligence) component, which will use the supplied information to make weighted decisions taking a diversity of aspects into account. The system to be developed should therefore be able to handle imponderables (route changes, schedule management, cancellation of visiting points, new visiting points) and, finally, should help tourists manage their time between Points of Interest (POIs). It should also allow a given pre-defined route to be followed or not, with the possibility of POI exploration scenarios suggested in loco, similar to places included in the itinerary that fit the users' profiles. The geographical test scope of this project will be the riverside area of Porto, as it is an ex-libris of the city and, simultaneously, an area with many challenges both geographically (given its slope) and in terms of the large number of events and places to visit.
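As a purely hypothetical illustration of the kind of weighted decision such an AI component might make, the sketch below ranks candidate POIs by profile affinity per minute spent, within the tourist's remaining time budget; all names, fields and weights are invented.

```java
import java.util.Comparator;
import java.util.List;

/** Hypothetical sketch of ranking candidate POIs: balance profile affinity
 *  against the visit time still available. All names and weights are
 *  illustrative assumptions, not the system's actual design. */
public class PoiRanker {

    record Poi(String name, double profileAffinity, int visitMinutes, int walkMinutes) {}

    /** Higher scores first; POIs that do not fit the remaining time are excluded. */
    static List<Poi> rank(List<Poi> candidates, int remainingMinutes) {
        return candidates.stream()
                .filter(p -> p.visitMinutes() + p.walkMinutes() <= remainingMinutes)
                .sorted(Comparator.comparingDouble(
                        (Poi p) -> p.profileAffinity() / (p.visitMinutes() + p.walkMinutes()))
                        .reversed())
                .toList();
    }

    public static void main(String[] args) {
        List<Poi> ranked = rank(List.of(
                new Poi("Ribeira viewpoint", 0.9, 20, 5),
                new Poi("Wine cellar tour", 0.8, 60, 15),
                new Poi("Church visit", 0.4, 30, 10)), 90);
        ranked.forEach(p -> System.out.println(p.name()));
    }
}
```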
Abstract:
Learning and teaching processes, like all human activities, can be mediated through the use of tools. Information and communication technologies are now widespread within education. Their use in the daily life of teachers and learners affords engagement with educational activities at any place and time and not necessarily linked to an institution or a certificate. In the absence of formal certification, learning under these circumstances is known as informal learning. Despite the lack of certification, learning with technology in this way presents opportunities to gather information about and present new ways of exploiting an individual’s learning. Cloud technologies provide ways to achieve this through new architectures, methodologies, and workflows that facilitate semantic tagging, recognition, and acknowledgment of informal learning activities. The transparency and accessibility of cloud services mean that institutions and learners can exploit existing knowledge to their mutual benefit. The TRAILER project facilitates this aim by providing a technological framework using cloud services, a workflow, and a methodology. The services facilitate the exchange of information and knowledge associated with informal learning activities ranging from the use of social software through widgets, computer gaming, and remote laboratory experiments. Data from these activities are shared among institutions, learners, and workers. The project demonstrates the possibility of gathering information related to informal learning activities independently of the context or tools used to carry them out.
Abstract:
Nowadays, due to the incredible growth of the mobile devices market, when we want to implement a client-server application we must consider mobile devices' limitations. In this paper we discuss the most reliable and fastest way to exchange information between a server and an Android mobile application. This is an important issue because, with a responsive application, the user experience is more enjoyable. We present a study that tests and evaluates two data transfer protocols, socket and HTTP, and three data serialization formats (XML, JSON and Protocol Buffers), using different environments and mobile devices, to determine which is the most practical and fastest to use.
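To give a feel for one dimension of such a comparison, the sketch below encodes the same record as XML and as JSON by hand and compares payload sizes; Protocol Buffers is omitted because it requires schema-generated classes, and all field names and values here are illustrative.

```java
import java.nio.charset.StandardCharsets;

/** Minimal sketch of one dimension of the comparison: the on-the-wire size of
 *  the same record in XML and JSON. Field names and values are illustrative
 *  assumptions, not the study's actual test data. */
public class PayloadSizeDemo {
    public static void main(String[] args) {
        String xml = "<user><id>42</id><name>Ana</name><city>Porto</city></user>";
        String json = "{\"id\":42,\"name\":\"Ana\",\"city\":\"Porto\"}";

        System.out.println("XML bytes:  " + xml.getBytes(StandardCharsets.UTF_8).length);
        System.out.println("JSON bytes: " + json.getBytes(StandardCharsets.UTF_8).length);
        // A binary format such as Protocol Buffers typically encodes the same
        // record in fewer bytes, at the cost of human readability.
    }
}
```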
Abstract:
The recent trends of chip architectures with a higher number of heterogeneous cores, and non-uniform memory/non-coherent caches, bring renewed attention to the use of Software Transactional Memory (STM) as a fundamental building block for developing parallel applications. Nevertheless, although STM promises to ease concurrent and parallel software development, it relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts on the responsiveness and timing guarantees required by embedded real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we assess the use of STM in the development of embedded real-time software, defending that the amount of contention can be reduced if read-only transactions access recent consistent data snapshots, progressing in a wait-free manner. We show how the required number of versions of a shared object can be calculated for a set of tasks. We also outline an algorithm to manage conflicts between update transactions that prevents starvation.
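A minimal sketch of the multi-version idea defended here: update transactions publish new immutable versions into a bounded circular buffer, while read-only transactions pick the latest committed version without blocking or retrying. The buffer capacity stands in for the per-object version count the paper shows how to calculate; the class and its names are assumptions, not the paper's actual algorithm.

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Minimal sketch of a multi-versioned shared object: writers publish new
 *  immutable versions into a bounded circular buffer; readers take the latest
 *  committed version wait-free. Names are illustrative assumptions. */
public class VersionedObject<T> {
    private final T[] versions;
    private final AtomicInteger latest = new AtomicInteger(0); // monotonically increasing

    @SuppressWarnings("unchecked")
    public VersionedObject(int capacity, T initialValue) {
        versions = (T[]) new Object[capacity];
        versions[0] = initialValue;
    }

    /** Wait-free read: return the most recent committed snapshot. */
    public T read() {
        return versions[latest.get() % versions.length];
    }

    /** Called by an update transaction at commit time (writers assumed
     *  serialized, e.g. by the update-transaction conflict management). */
    public void publish(T newValue) {
        int next = latest.get() + 1;
        versions[next % versions.length] = newValue;
        latest.set(next); // readers switch atomically to the new version
    }
}
```

The catch this sketch glosses over is exactly the paper's question: the buffer must hold enough versions that a slot an in-flight reader is using is never recycled before it finishes, which is why the required version count must be derived from the task set.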
Abstract:
The foreseen evolution of chip architectures towards a higher number of heterogeneous cores, with non-uniform memory and non-coherent caches, brings renewed attention to the use of Software Transactional Memory (STM) as an alternative to lock-based synchronisation. However, STM relies on the possibility of aborting conflicting transactions to maintain data consistency, which impacts on the responsiveness and timing guarantees required by real-time systems. In these systems, contention delays must be (efficiently) limited so that the response times of tasks executing transactions are upper-bounded and task sets can be feasibly scheduled. In this paper we defend the role of the transaction contention manager in reducing the number of transaction retries and in helping the real-time scheduler assure schedulability. For this purpose, the contention management policy should be aware of on-line scheduling information.
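As a sketch of what a scheduling-aware contention manager of the kind advocated might look like, the snippet below resolves a conflict in favour of the transaction whose task has the earlier absolute deadline, pushing retries onto tasks with more slack; the interface and names are assumptions, not the paper's actual design.

```java
/** Hypothetical sketch of a scheduling-aware contention manager: when two
 *  transactions conflict, the one belonging to the task with the earlier
 *  absolute deadline wins and the other is aborted and retried. The types
 *  and names are illustrative assumptions, not the paper's actual design. */
public class DeadlineAwareContentionManager {

    /** On-line scheduling information the manager consults per transaction. */
    record TxInfo(long taskId, long absoluteDeadlineNanos) {}

    enum Decision { ABORT_SELF, ABORT_ENEMY }

    /** Resolve a conflict between the calling transaction and the one it hit. */
    Decision resolve(TxInfo self, TxInfo enemy) {
        // Earliest-deadline-first tie-breaking keeps retries away from urgent tasks.
        if (self.absoluteDeadlineNanos() < enemy.absoluteDeadlineNanos()) {
            return Decision.ABORT_ENEMY;
        }
        return Decision.ABORT_SELF;
    }
}
```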
Abstract:
With the ongoing growth of teleradiology, the need was felt to create more and better software to sustain that growth. This work addresses the topic of software certification and CE marking, since to enter the European market all Medical Devices (MD) must be duly certified. To carry out the CE marking and certification, the standards and regulations applicable to the marking of MDs at the European level, and also in the United States of America, will be studied. The topic of personal data security will also be studied, in order to ensure that the device complies with the legislation in force. The purpose of this study is the certification of sysPACS, proprietary software from efficientia: a comprehensive online service that enables the integrated management of the storage and distribution of medical images for diagnostic support.
Abstract:
Most of the traditional software and database development approaches tend to be serial, not evolutionary and certainly not agile, especially in data-oriented aspects. Most of the more commonly used methodologies are strict, meaning they are composed of several stages, each with very specific associated tasks. A clear example is the Rational Unified Process (RUP), divided into Business Modeling, Requirements, Analysis & Design, Implementation, Testing and Deployment. But what happens when the need for a well-designed and structured plan meets the reality of a small start-up company that aims to build an entire user-experience solution? Here resource control and time productivity are vital, requirements are in constant change, and so is the product itself. To succeed in this environment, a highly collaborative and evolutionary development approach is mandatory, and constantly changing requirements imply an iterative development process. The project focus is on Data Warehouse development and business modeling. This area is usually a tricky one: business knowledge is internal to the enterprise, and how it works, its goals, and what is relevant for analysis are internal business processes. Throughout this document it will be explained why Agile Modeling was chosen, and how an iterative and evolutionary methodology allowed for reasonable planning and documentation while permitting development flexibility, from idea to product. More importantly, it is shown how this was applied to the development of a retail-focused Data Warehouse: a productized Data Warehouse built on the knowledge of not one but several clients' needs, one that aims not just to store the usual business areas but to create an innovative set of business metrics by joining them with store-environment analysis, converting Business Intelligence into Actionable Business Intelligence.
Abstract:
The growing awareness of the importance of the operation and maintenance phase, together with the momentum that the Building Information Modelling (BIM) methodology has gained in recent years, suggests a need to change the current approach to facilities management in order to equip it with the latest technological innovations, such as the use of BIM. Building Information Models present the ideal characteristics for integrating facilities management, not only through visualisation of the building, but above all through the potential offered by the database, with information on each of the components present and their relationships. The scope of this work thus involves integrating facilities management with the BIM model created, representative of the building under study. The work begins with the definition of the scope and objectives, set out in Chapter 1. In Chapter 2, a survey of the current state of the art of the BIM and FM methodologies is carried out, in order to become acquainted with their main concepts. A survey of the BIM-FM field was also carried out, to identify the current technological solutions, how their information exchange is performed, and some cases in which this methodology has been applied. Based on the information gathered about the methodologies and on the case studies examined, the practical application is carried out in Chapter 3, the central chapter of this work. This application is divided into three main phases. In the first phase, the information that needs to be obtained for building the model and for the subsequent FM application is specified and collected. The choice of which information to collect is made by weighing all the existing factors while meeting the stated requirements. In the second phase, based on the compilation of the previously gathered information, the building model is produced. Following the BIM working method, modelling is carried out by discipline: architecture is modelled first and then, using that model as a basis, the water supply, waste water, HVAC and electrical disciplines are modelled. This choice was also encouraged by the module-based organisation of the software used for modelling. In the last phase of the practical case, the information entered during the building modelling phase is exported to the FM software, in this specific case IBM Maximo. The Construction Operations Building Information Exchange (COBie) format was used to export these data, so as to guarantee the integrity and conformity of the transferred information. Chapter 4 addresses the specifics of the existing information, the modelling, and the data exchange between the modelling software and the software used for building management. Some topics for future development are also suggested, with a view to expanding the FM fields that use the model. BIM-FM is an emerging topic in today's BIM landscape, and its use is regarded as an added value to the BIM process. The compilation of information during the design and construction phases, together with the existence of the model, makes implementing FM with the BIM model a natural sequence.
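As an illustrative aside, COBie data is tabular, so the hand-off step described above can be sketched as reading a simplified Component sheet exported to CSV and mapping each row to an asset record for the FM system; the three-column layout and the file path are reduced assumptions, not the full COBie schema.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

/** Illustrative sketch of the COBie hand-off step: read a simplified Component
 *  sheet exported as CSV and map each row to an asset record. The three-column
 *  layout (Name, TypeName, Space) is a reduced assumption, not the full COBie
 *  schema, and the file path is hypothetical. */
public class CobieComponentReader {

    record Asset(String name, String typeName, String space) {}

    static List<Asset> readComponents(Path csv) throws IOException {
        return Files.readAllLines(csv).stream()
                .skip(1)                          // skip the header row
                .map(line -> line.split(",", -1))
                .map(cols -> new Asset(cols[0], cols[1], cols[2]))
                .toList();
    }

    public static void main(String[] args) throws IOException {
        for (Asset a : readComponents(Path.of("COBie_Component.csv"))) {
            System.out.printf("%s (%s) in space %s%n", a.name(), a.typeName(), a.space());
        }
    }
}
```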
Abstract:
Developing a client-server application for a mobile environment can bring many challenges because of mobile devices' limitations. This paper therefore discusses the most reliable way to exchange information between a server and an Android mobile application, since it is important for users to have an application that works responsively and preferably without errors. In this discussion, two data transfer protocols (Socket and HTTP) and three data serialization formats (XML, JSON and Protocol Buffers) were tested using several metrics, to evaluate which is the most practical and fastest to use.
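To make the protocol side of this comparison concrete, here is a minimal sketch timing a single request/response round trip over HTTP (with the JDK's built-in client) and over a raw TCP socket; the host, port and path are placeholders, not the study's actual test endpoints.

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.Socket;
import java.net.URL;

/** Minimal sketch of timing one round trip over HTTP and over a raw TCP socket,
 *  in the spirit of the comparison described above. The host, port and path are
 *  placeholders, not the study's actual endpoints. */
public class TransferTimingDemo {

    static long timeHttp(String url) throws Exception {
        long start = System.nanoTime();
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (InputStream in = conn.getInputStream()) {
            in.readAllBytes();                        // drain the response body
        }
        return System.nanoTime() - start;
    }

    static long timeSocket(String host, int port, byte[] request) throws Exception {
        long start = System.nanoTime();
        try (Socket socket = new Socket(host, port)) {
            OutputStream out = socket.getOutputStream();
            out.write(request);
            out.flush();
            socket.getInputStream().readNBytes(1024); // read up to one response chunk
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("HTTP ns:   " + timeHttp("http://example.com/api/data"));
        System.out.println("Socket ns: " + timeSocket("example.com", 80,
                "GET / HTTP/1.0\r\nHost: example.com\r\n\r\n".getBytes()));
    }
}
```

A raw socket avoids HTTP header overhead and connection setup per request, which is one reason such studies measure the two separately across devices and networks.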