918 results for Open Source Software


Relevance:

90.00%

Publisher:

Abstract:

Abstract: Decision support systems have been widely used for years in companies to gain insights from internal data and thus make successful decisions. Lately, thanks to the increasing availability of open data, these systems are also integrating open data to enrich the decision-making process with external data. On the other hand, within an open-data scenario, decision support systems can also be useful for deciding which data should be opened, considering not only technical or legal constraints but also other requirements, such as the "reuse potential" of the data. In this talk, we focus on both issues: (i) open data for decision making, and (ii) decision making for opening data. We will first briefly comment on some research problems regarding the use of open data for decision making. Then, we will outline a novel decision-making approach (based on how open data is actually being used in open-source projects hosted on GitHub) for supporting open data publication. Bio of the speaker: Jose-Norberto Mazón holds a PhD from the University of Alicante (Spain). He is head of the "Cátedra Telefónica" on Big Data and coordinator of the Computing degree at the University of Alicante. He is also a member of the WaKe research group at the University of Alicante. His research work focuses on open data management, data integration and business intelligence within "big data" scenarios, and their application to the tourism domain (smart tourism destinations). He has published his research in international journals such as Decision Support Systems, Information Sciences, Data & Knowledge Engineering and ACM Transactions on the Web. Finally, he is involved in the open data project at the University of Alicante, including its open data portal at http://datos.ua.es
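The talk only sketches the idea of measuring how open data is actually used in open-source projects; one possible (purely hypothetical, not the speaker's actual method) way to probe such reuse is to query the GitHub search API for code that references a given open-data portal:

```python
import requests

# Hypothetical probe of "reuse potential": count GitHub code-search hits that reference
# a given open-data portal. Illustration only, not the approach described in the talk.

def count_code_references(query: str, token: str) -> int:
    resp = requests.get(
        "https://api.github.com/search/code",
        params={"q": query},
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["total_count"]

# Example usage (needs a GitHub personal access token; the query is just an example):
# print(count_code_references('"datos.ua.es"', token="<your token>"))
```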

Relevance:

90.00%

Publisher:

Abstract:

Background Timely assessment of the burden of HIV/AIDS is essential for policy setting and programme evaluation. In this report from the Global Burden of Disease Study 2015 (GBD 2015), we provide national estimates of levels and trends of HIV/AIDS incidence, prevalence, coverage of antiretroviral therapy (ART), and mortality for 195 countries and territories from 1980 to 2015. Methods For countries without high-quality vital registration data, we estimated prevalence and incidence with data from antenatal care clinics and population-based seroprevalence surveys, and with assumptions by age and sex on initial CD4 distribution at infection, CD4 progression rates (probability of progression from higher to lower CD4 cell-count category), on- and off-ART mortality, and mortality from all other causes. Our estimation strategy links the GBD 2015 assessment of all-cause mortality and estimation of incidence and prevalence so that for each draw from the uncertainty distribution all assumptions used in each step are internally consistent. We estimated incidence, prevalence, and deaths with GBD versions of the Estimation and Projection Package (EPP) and Spectrum software originally developed by the Joint United Nations Programme on HIV/AIDS (UNAIDS). We used an open-source version of EPP and recoded Spectrum for speed, and used updated assumptions from systematic reviews of the literature and GBD demographic data. For countries with high-quality vital registration data, we developed the cohort incidence bias adjustment model to estimate HIV incidence and prevalence largely from the number of deaths caused by HIV recorded in cause-of-death statistics. We corrected these statistics for garbage coding and HIV misclassification. Findings Global HIV incidence reached its peak in 1997, at 3·3 million new infections (95% uncertainty interval [UI] 3·1–3·4 million). Annual incidence has stayed relatively constant at about 2·6 million per year (range 2·5–2·8 million) since 2005, after a period of fast decline between 1997 and 2005. The number of people living with HIV/AIDS has been steadily increasing and reached 38·8 million (95% UI 37·6–40·4 million) in 2015. At the same time, HIV/AIDS mortality has been declining at a steady pace, from a peak of 1·8 million deaths (95% UI 1·7–1·9 million) in 2005, to 1·2 million deaths (1·1–1·3 million) in 2015. We recorded substantial heterogeneity in the levels and trends of HIV/AIDS across countries. Although many countries have experienced decreases in HIV/AIDS mortality and in annual new infections, other countries have had slowdowns or increases in rates of change in annual new infections. Interpretation The scale-up of ART and the prevention of mother-to-child transmission have been among the great successes of global health in the past two decades. However, in the past decade, progress in reducing new infections has been slow, development assistance for health devoted to HIV has stagnated, and resources for health in low-income countries have grown slowly. Achievement of the new ambitious goals for HIV enshrined in Sustainable Development Goal 3 and the 90-90-90 UNAIDS targets will be challenging, and will need continued efforts from governments and international agencies in the next 15 years to end AIDS by 2030.

Relevance:

90.00%

Publisher:

Abstract:

Context: Obfuscation is a common technique used to protect software against malicious reverse engineering. Obfuscators manipulate the source code to make it harder to analyze and more difficult for an attacker to understand. Although different obfuscation algorithms and implementations are available, they have never been directly compared in a large-scale study. Aim: This paper aims at evaluating and quantifying the effect of several different obfuscation implementations (both open source and commercial), to help developers and project managers decide which one to adopt. Method: In this study we applied 44 obfuscations to 18 subject applications covering a total of 4 million lines of code. The effectiveness of these source code obfuscations has been measured using 10 code metrics, considering modularity, size and complexity of code. Results: Results show that some of the considered obfuscations are effective in making code metrics change substantially from the original to the obfuscated code, although this change (called the potency of the obfuscation) differs across metrics. In the paper we recommend which obfuscations to select, given the security requirements of the software to be protected.
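As an illustration of metric-based potency, here is a minimal sketch following the classic Collberg-style definition (potency = metric(obfuscated) / metric(original) - 1); the metric names and values below are hypothetical and not taken from the paper:

```python
# Minimal sketch: metric-based potency of an obfuscation.
# Potency > 0 means the obfuscated code scores higher on the metric
# (i.e., it looks more complex) than the original.

def potency(original_metric: float, obfuscated_metric: float) -> float:
    """Classic potency definition: relative change of a code metric."""
    return obfuscated_metric / original_metric - 1.0

# Hypothetical metric values for one application, before and after obfuscation.
metrics_original = {"cyclomatic_complexity": 1200, "loc": 45000, "coupling": 310}
metrics_obfuscated = {"cyclomatic_complexity": 2100, "loc": 61000, "coupling": 395}

for name in metrics_original:
    p = potency(metrics_original[name], metrics_obfuscated[name])
    print(f"{name}: potency = {p:+.2f}")
```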

Relevance:

90.00%

Publisher:

Abstract:

Producing recombinant proteins for research use by fermentation is a common method in the bioindustry. The microbes are grown in a fermenter, which provides a controlled growth environment and suitable production conditions for the desired product. One mode of fermentation is high-yield, long-duration fed-batch cultivation, in which a cell density significantly higher than in batch cultivation is reached by continuing the feed of the growth-limiting substrate after the batch phase. At laboratory scale, fermenter cultivation volumes range from one litre to tens of litres, and the cultivation is monitored and controlled either from the fermenter itself or from a computer. In a typical fermentation process the operator monitors, among other things, foam height and starts pumps as conditions change; such tasks are often automated in industrial-scale equipment. The purpose of this Master's thesis was to modernize the MS-DOS-based computer control program of two BioFlo® series benchtop fermenters in the biotechnology laboratory of the University of Turku and to add remote monitoring and control. A separate optical cell-density sensor was to be connected to the program, and noise in its readings was to be reduced by signal processing. In addition, the additions of antifoam agent and inducer were to be automated in fed-batch cultivation, and the feasibility of detecting foam height with machine-vision methods was to be investigated so that automatic antifoam additions could be triggered from a webcam feed. Test cultivations showed that the updated control program works in both batch and fed-batch modes. With the new user interface it was possible to automate the additions and feed-rate changes of batch cultivation and to detect the foam height of the culture broth with an accuracy of two centimetres, which is sufficient for antifoam addition. The interface also made it possible to control and monitor the cultivation remotely. The control program developed in this work was released as open software without the remote and webcam functions. The program works well as a user interface for BioFlo® series fermenters, and thanks to its open source code anyone can use it as a basis for new projects or for other fermenter models.
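The thesis states that noise in the optical cell-density readings was reduced by signal processing, but the abstract does not say how; the sketch below shows one common generic approach (a rolling median followed by a moving average), with hypothetical sensor values:

```python
import numpy as np

def smooth_od_signal(readings, median_window=5, mean_window=9):
    """Reduce spikes with a rolling median, then smooth with a moving average."""
    x = np.asarray(readings, dtype=float)
    # Rolling median removes isolated spikes (e.g., bubbles passing the sensor).
    pad = median_window // 2
    padded = np.pad(x, pad, mode="edge")
    medianed = np.array([np.median(padded[i:i + median_window]) for i in range(len(x))])
    # Moving average smooths the remaining high-frequency noise.
    kernel = np.ones(mean_window) / mean_window
    return np.convolve(medianed, kernel, mode="same")

# Hypothetical noisy optical-density readings from a fed-batch cultivation.
raw = [0.51, 0.52, 0.90, 0.53, 0.55, 0.56, 0.58, 0.20, 0.60, 0.62, 0.63]
print(np.round(smooth_od_signal(raw), 3))
```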

Relevance:

90.00%

Publisher:

Abstract:

The Graphical User Interface (GUI) is an integral component of contemporary computer software. A stable and reliable GUI is necessary for the correct functioning of software applications. Comprehensive verification of the GUI is a routine part of most software development life-cycles. The input space of a GUI is typically large, making exhaustive verification difficult. GUI defects are often revealed by exercising parts of the GUI that interact with each other. It is challenging for a verification method to drive the GUI into states that might contain defects. In recent years, model-based methods that target specific GUI interactions have been developed. These methods create a formal model of the GUI's input space from the specification of the GUI, visible GUI behaviors, and static analysis of the GUI's program code. GUIs are typically dynamic in nature; their user-visible state is guided by the underlying program code and dynamic program state. This research extends existing model-based GUI testing techniques by modelling interactions between the visible GUI of a GUI-based software system and its underlying program code. The new model is able to test the GUI, efficiently and effectively, in ways that were not possible using existing methods. The thesis is this: long, useful GUI test cases can be created by examining the interactions between the GUI of a GUI-based application and its program code. To explore this thesis, a model-based GUI testing approach is formulated and evaluated. In this approach, program-code-level interactions between GUI event handlers are examined, modelled and deployed for constructing long GUI test cases. These test cases are able to drive the GUI into states that were not possible using existing models. Implementation and evaluation have been conducted using GUITAR, a fully automated, open-source GUI testing framework.
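As a minimal sketch of the general idea (not GUITAR's actual model): treat code-level interactions between event handlers as edges of a graph and generate long test cases by walking paths in it. All handler names and interactions below are hypothetical.

```python
# Hypothetical event handlers and the code-level interactions between them
# (an edge A -> B means handler A writes program state that handler B reads).
interactions = {
    "openFile": ["enableSave", "refreshView"],
    "enableSave": ["saveFile"],
    "refreshView": ["selectItem"],
    "selectItem": ["editItem", "deleteItem"],
    "editItem": ["saveFile"],
}

def long_test_cases(graph, start, max_length):
    """Enumerate event sequences (test cases) by following interaction edges."""
    cases = []
    def walk(path):
        if len(path) == max_length or path[-1] not in graph:
            cases.append(list(path))
            return
        for nxt in graph[path[-1]]:
            walk(path + [nxt])
    walk([start])
    return cases

for case in long_test_cases(interactions, "openFile", max_length=4):
    print(" -> ".join(case))
```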

Relevance:

90.00%

Publisher:

Abstract:

An experimental and numerical study of turbulent fire suppression is presented. For this work, a novel and canonical facility has been developed, featuring a buoyant, turbulent, methane or propane-fueled diffusion flame suppressed via either nitrogen dilution of the oxidizer or application of a fine water mist. Flames are stabilized on a slot burner surrounded by a co-flowing oxidizer, which allows controlled delivery of either suppressant to achieve a range of conditions from complete combustion through partial and total flame quenching. A minimal supply of pure oxygen is optionally applied along the burner to provide a strengthened flame base that resists liftoff extinction and permits the study of substantially weakened turbulent flames. The carefully designed facility features well-characterized inlet and boundary conditions that are especially amenable to numerical simulation. Non-intrusive diagnostics provide detailed measurements of suppression behavior, yielding insight into the governing suppression processes, and aiding the development and validation of advanced suppression models. Diagnostics include oxidizer composition analysis to determine suppression potential, flame imaging to quantify visible flame structure, luminous and radiative emissions measurements to assess sooting propensity and heat losses, and species-based calorimetry to evaluate global heat release and combustion efficiency. The studied flames experience notable suppression effects, including transition in color from bright yellow to dim blue, expansion in flame height and structural intermittency, and reduction in radiative heat emissions. Still, measurements indicate that the combustion efficiency remains close to unity, and only near the extinction limit do the flames experience an abrupt transition from nearly complete combustion to total extinguishment. Measurements are compared with large eddy simulation results obtained using the Fire Dynamics Simulator, an open-source computational fluid dynamics software package. Comparisons of experimental and simulated results are used to evaluate the performance of available models in predicting fire suppression. Simulations in the present configuration highlight the issue of spurious reignition that is permitted by the classical eddy-dissipation concept for modeling turbulent combustion. To address this issue, simple treatments to prevent spurious reignition are developed and implemented. Simulations incorporating these treatments are shown to produce excellent agreement with the experimentally measured data, including the global combustion efficiency.
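The abstract does not give the calorimetry equations; as a rough illustration of how a global combustion efficiency can be formed from species-based measurements, the sketch below divides the heat release inferred from oxygen consumption (using Huggett's constant of roughly 13.1 MJ per kg of O2 consumed) by the ideal heat release of the supplied methane (lower heating value of roughly 50 MJ/kg). The flow values are hypothetical.

```python
# Sketch: global combustion efficiency from species-based (oxygen-consumption) calorimetry.
# Assumptions: ~13.1 MJ released per kg of O2 consumed (Huggett's constant) and
# a lower heating value of ~50 MJ/kg for methane. Flow values are hypothetical.

E_O2 = 13.1e6          # J per kg of O2 consumed
LHV_CH4 = 50.0e6       # J per kg of methane

def combustion_efficiency(m_dot_fuel, m_dot_o2_consumed):
    """Ratio of measured heat release (from O2 consumption) to ideal heat release."""
    q_measured = E_O2 * m_dot_o2_consumed          # W
    q_ideal = LHV_CH4 * m_dot_fuel                 # W
    return q_measured / q_ideal

# Example: 0.10 g/s of methane; complete combustion would consume ~4x that mass of O2.
print(f"eta = {combustion_efficiency(1.0e-4, 3.6e-4):.2f}")
```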

Relevance:

90.00%

Publisher:

Abstract:

Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques; using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to:
• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains could be made from multicore parallelism (section 5.3.2).
• Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (section 5.3.5).
To evaluate ZSIM, two types of test circuits were used: 1. circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators; 2. circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and it allowed testing of a range of very large circuits, larger than the ones for which it was possible to obtain open source files. The experimental results show that with SIMD acceleration and multicore, ZSIM gained a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives comparable simulation performance to the IBM Blue Gene supercomputer at very much lower cost. The experimental results have shown that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself, whereas targeting GPUs requires explicit cache management in the program, which increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the same architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
To conclude, the two main achievements are restated as follows: the primary achievement of this work was showing that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms; the secondary achievement was the development of a synthetic testing suite that went beyond the scale range that was previously publicly available, based on prior work that showed the synthesis technique is valid.
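As a minimal sketch of the underlying idea (not ZSIM's actual data structure): pack one test pattern per bit of a machine word and evaluate a levelized netlist with word-wide bitwise operations, so each operation simulates many patterns in parallel. The tiny netlist below is hypothetical.

```python
import numpy as np

# Bit-parallel logic simulation: each uint64 word carries 64 independent test patterns,
# so one word-level operation evaluates a gate for 64 patterns at once (and NumPy
# vectorises that across an array of words). The tiny netlist below is hypothetical.

rng = np.random.default_rng(0)
N_WORDS = 4                                   # 4 words * 64 bits = 256 patterns per signal

netlist = [                                   # gates listed in topological (levelized) order
    ("n1", "AND", ("a", "b")),
    ("n2", "NOT", ("c",)),
    ("n3", "OR",  ("n1", "n2")),
    ("out", "AND", ("n3", "a")),
]

signals = {name: rng.integers(0, 2**63, N_WORDS, dtype=np.uint64)
           for name in ("a", "b", "c")}

for out, op, ins in netlist:
    if op == "AND":
        signals[out] = signals[ins[0]] & signals[ins[1]]
    elif op == "OR":
        signals[out] = signals[ins[0]] | signals[ins[1]]
    elif op == "NOT":
        signals[out] = ~signals[ins[0]]

print(f"simulated {N_WORDS * 64} patterns; first word of 'out': {int(signals['out'][0]):016x}")
```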

Relevance:

90.00%

Publisher:

Abstract:

This dissertation presents a software engineering project for the development and implementation of XEOReports, a module that is part of the XEO platform. The module is intended for building dynamic reports in Portable Document Format (PDF) based on edition screens of the XEO platform. JasperReports, an open-source reporting engine that generates reports in several file formats, was used so that the developed module constitutes the integration between the two platforms, XEO and JasperReports. The module was developed taking into account the requirements that JasperReports imposes for generating reports from XEO platform screens. The study followed the UML software development methodology and respected the good software development practices inherent in it.

Relevance:

90.00%

Publisher:

Abstract:

This document presents GEmSysC, a unified cryptographic API for embedded systems. Software layers implementing this API can be built over existing libraries, allowing embedded software to access cryptographic functions in a consistent way that does not depend on the underlying library. The API complies with good practices for API design and for embedded software development, and took its inspiration from other cryptographic libraries and standards. The main inspiration for creating GEmSysC was the CMSIS-RTOS standard, which defines a unified API for embedded software in an implementation-independent way but targets operating systems instead of cryptographic functions. GEmSysC is made of a generic core and attachable modules, one for each cryptographic algorithm. This document contains the specification of the core of GEmSysC and three of its modules: AES, RSA and SHA-256. GEmSysC was built targeting embedded systems, but this does not restrict its use to such systems; after all, embedded systems are just very limited computing devices. As a proof of concept, two implementations of GEmSysC were made. One was built over wolfSSL, an open-source library for embedded systems. The other was built over OpenSSL, which is open source and a de facto standard; unlike wolfSSL, OpenSSL does not specifically target embedded systems. The implementation built over wolfSSL was evaluated on a Cortex-M3 processor with no operating system, while the implementation built over OpenSSL was evaluated on a personal computer running the Windows 10 operating system. This document presents test results showing GEmSysC to be simpler than other libraries in some aspects. The results show that both implementations incur little overhead in computation time compared to the underlying cryptographic libraries themselves. The overhead has been measured for each cryptographic algorithm and lies between roughly 0% and 0.17% for the implementation over wolfSSL and between 0.03% and 1.40% for the one over OpenSSL. This document also presents the memory costs of each implementation.
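GEmSysC itself is specified as a C API; the Python sketch below only illustrates the architectural idea of a generic core with attachable, library-independent modules (here a SHA-256 module with a swappable backend). The class and function names are hypothetical, not GEmSysC's.

```python
import hashlib

# Sketch of the "generic core + attachable modules" idea: application code calls a
# stable interface, while the backend (here hashlib, or a stub for another library)
# can be swapped without touching the caller. Names are hypothetical, not GEmSysC's.

class Sha256Backend:
    """Interface every SHA-256 backend module must implement."""
    def digest(self, data: bytes) -> bytes:
        raise NotImplementedError

class HashlibSha256(Sha256Backend):
    def digest(self, data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

class CryptoCore:
    """Generic core: routes calls to whichever backend module is attached."""
    def __init__(self):
        self._modules = {}
    def attach(self, name: str, module):
        self._modules[name] = module
    def sha256(self, data: bytes) -> bytes:
        return self._modules["sha256"].digest(data)

core = CryptoCore()
core.attach("sha256", HashlibSha256())
print(core.sha256(b"hello").hex())
```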

Relevance:

90.00%

Publisher:

Abstract:

This report introduces the world of virtual machine servers step by step, in a graphical way and without requiring prior knowledge of virtualization. It is a guide to installing and configuring a data centre with the following virtualization technologies from VMware: vCenter Server and vSphere ESXi, together with the open-source network storage FreeNAS. This deployment is then used to test the operation of the vMotion technology. vMotion is a technology for live-migrating a virtual machine from one virtual machine server to another, transparently and without disconnections. With current processor power and bandwidth, this technology has an almost negligible impact on the performance of the virtual machine, which allows it to be applied in a wide variety of sectors.
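The report works through the graphical vSphere clients; as a sketch of what the same vMotion operation could look like when scripted against the vSphere API (assuming the pyVmomi Python SDK; host names, credentials and the VM name are placeholders):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Sketch: trigger a vMotion (live migration) of one VM to another ESXi host via the
# vSphere API using pyVmomi. All names and credentials below are placeholders.

ctx = ssl._create_unverified_context()          # lab setup with self-signed certificates
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    return next(obj for obj in view.view if obj.name == name)

vm = find(vim.VirtualMachine, "test-vm")
target_host = find(vim.HostSystem, "esxi02.lab.local")

spec = vim.vm.RelocateSpec(host=target_host)    # change host only; keep the same datastore
WaitForTask(vm.RelocateVM_Task(spec))           # blocks until the migration completes
Disconnect(si)
```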

Relevance:

90.00%

Publisher:

Abstract:

This work used freely licensed hardware and software tools to set up a low-cost, easy-to-deploy cellular base station (BTS). Starting from technical concepts that facilitate the installation of the OpenBTS system, and using the USRP N210 (Universal Software Radio Peripheral) hardware, a network analogous to the GSM mobile telephony standard was deployed. Mobile phones were used as SIP (Session Initiation Protocol) extensions in Asterisk, making it possible to place calls between terminals, send text messages (SMS), and make calls from an OpenBTS terminal to another mobile operator, among other services.

Relevance:

90.00%

Publisher:

Abstract:

During the lifetime of a research project, different partners develop several research prototype tools that share many common aspects. This is equally true for researchers as individuals and as groups: during a period of time they often develop several related tools to pursue a specific research line. Making research prototype tools easily accessible to the community is of utmost importance to promote the corresponding research, get feedback, and increase the tools' lifetime beyond the duration of a specific project. One way to achieve this is to build graphical user interfaces (GUIs) that facilitate trying tools; in particular, with web interfaces one avoids the overhead of downloading and installing the tools. Building GUIs from scratch is a tedious task, in particular for web interfaces, and thus it typically gets low priority when developing a research prototype. Often we opt for copying the GUI of one tool and modifying it to fit the needs of a new related tool. Apart from code duplication, these tools will "live" separately, even though we might benefit from having them all in a common environment, since they are related. This work aims at simplifying the process of building GUIs for research prototype tools. In particular, we present EasyInterface, a toolkit based on a novel methodology that provides an easy way to make research prototype tools available via different environments, such as a web interface, within Eclipse, etc. It includes a novel text-based output language that allows results to be presented graphically without requiring any knowledge of GUI/web programming. For example, an output of a tool could be (a structured version of) "highlight line number 10 of file ex.c" and "when the user clicks on line 10, open a dialog box with the text ...". The environment interprets this output and converts it to the corresponding visual effects. The advantage of this approach is that the output is interpreted equally by all environments of EasyInterface, e.g., the web interface, the Eclipse plugin, etc. EasyInterface has been developed in the context of the Envisage [5] project and has been evaluated on tools developed in this project, which include static analyzers, test-case generators, compilers, simulators, etc. EasyInterface is open source and available on GitHub.
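As a purely hypothetical illustration of what such a structured, tool-agnostic output could look like (this is not EasyInterface's actual output language, only a sketch of the idea):

```python
import json

# Hypothetical structured output a prototype tool could emit; a GUI environment
# (web interface, Eclipse plugin, ...) would render it as highlights and dialogs.
# This is NOT EasyInterface's actual output language, only an illustration.

tool_output = {
    "actions": [
        {"type": "highlight", "file": "ex.c", "line": 10},
        {"type": "on_click", "file": "ex.c", "line": 10,
         "show_dialog": "Possible division by zero detected here."},
    ]
}

print(json.dumps(tool_output, indent=2))
```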

Relevance:

90.00%

Publisher:

Abstract:

This report presents the work carried out at Portugal Telecom Inovação over six months. The project was part of the Medigraf product, a telemedicine platform developed and marketed by Portugal Telecom Inovação, intended to be integrated into healthcare organizations. An information system is a very important component of healthcare organizations: it is through it that all information concerning the organization is processed and communicated. For a new system incorporated into the organization to reach its full potential, there must be full integration and interoperability between the new system and the existing information system. It is therefore essential to achieve integration between Medigraf and the information system already in place in healthcare organizations. To this end, it is necessary to determine the requirements for integration and information sharing between heterogeneous systems, explaining the concepts of standards, interoperability, and terminologies. The state of the art revealed that integration between heterogeneous systems in healthcare organizations is difficult to achieve. Among the various existing organizations, HL7 (Health Level Seven) stands out for its advances in this area and for the development of two versions of a message mediation standard (HL7 v2.x and HL7 v3) aimed at achieving interoperability between heterogeneous systems. After a deeper study of the HL7 v3 messaging standard, it was necessary to adopt an integration architecture/topology in order to implement the standard; in this study, the EAI (Enterprise Application Integration) family of solutions was identified as the best option. In order to implement the HL7 v3 standard on the chosen architecture, a survey of the existing software was carried out, which resulted in the choice of Mirth Connect as the best approach for implementing interoperability between Medigraf and an information system. This software acts as a mediation middleware in the communication between heterogeneous systems. Two use cases of the standard were selected for implementation in order to demonstrate its use. Natively, Mirth Connect does not support validation of HL7 v3 messages, supporting only HL7 v2.x. Because Mirth Connect is open source software, it was possible to develop a method capable of performing this validation; the method was published on the Mirth Corporation forum so that it could be shared. Finally, some conclusions are drawn and future work is outlined.
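In the thesis the validation was implemented inside Mirth Connect; as a generic illustration of schema-based validation of an HL7 v3 (XML) message, the sketch below uses lxml, with placeholder schema and message file names:

```python
from lxml import etree

# Sketch: validate an HL7 v3 message (plain XML) against its XSD schema.
# File names are placeholders; real HL7 v3 schemas are published by HL7.

schema_doc = etree.parse("PRPA_IN201301UV02.xsd")   # placeholder schema file
schema = etree.XMLSchema(schema_doc)

message = etree.parse("admission_message.xml")      # placeholder HL7 v3 message

if schema.validate(message):
    print("Message is valid HL7 v3.")
else:
    for error in schema.error_log:
        print(f"line {error.line}: {error.message}")
```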

Relevance:

90.00%

Publisher:

Abstract:

Master's dissertation in Language Sciences (Ciências da Linguagem), Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2016

Relevance:

90.00%

Publisher:

Abstract:

Thesis (Ph.D., Computing) -- Queen's University, 2016