970 results for Open Source BI Platforms
Abstract:
With the development of the Internet of Things, more and more IoT platforms have emerged, each with its own structure and characteristics. Weighing their advantages and disadvantages, we should choose the platform that suits each scenario. In this project, I compare a cloud-based centralized platform, Microsoft Azure IoT Hub, with a fully distributed platform, SensibleThings. Performance is compared quantitatively in two scenarios: increasing message sending rates, and devices placed in different locations. Security, resource utilization and storage are compared in general terms. I conclude that SensibleThings performs more stably when many messages are pushed to the platform, while Microsoft Azure IoT Hub scales better geographically. In the general comparison, Microsoft Azure IoT Hub offers better security and places lower requirements on the local device than SensibleThings. SensibleThings is open source and free, whereas Microsoft Azure follows a "pay as you go" model with throttling limits that vary by edition. Microsoft's platform is also more user-friendly.
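As a rough illustration of the cloud side of this comparison, the sketch below pushes device-to-cloud messages at increasing rates using the azure-iot-device Python SDK; the connection string is a placeholder and the batch sizes are assumptions, not the benchmark actually used in the project.

```python
# Hedged sketch of a device-to-cloud send-rate test against Azure IoT Hub.
# Uses the public azure-iot-device SDK; connection string is a placeholder.
import time
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=<hub>.azure-devices.net;DeviceId=<id>;SharedAccessKey=<key>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()

# Ramp up the batch size and record how long each batch takes to send.
for batch in (10, 100, 1000):
    start = time.time()
    for i in range(batch):
        client.send_message(Message(f'{{"seq": {i}, "temp": 21.5}}'))
    print(f"{batch} messages in {time.time() - start:.2f}s")

client.shutdown()
```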
Abstract:
Decision support systems have been widely used for years in companies to gain insights from internal data, thus making successful decisions. Lately, thanks to the increasing availability of open data, these systems are also integrating open data to enrich the decision-making process with external data. On the other hand, within an open-data scenario, decision support systems can also be useful to decide which data should be opened, not only by considering technical or legal constraints, but also other requirements, such as the "reusing potential" of the data. In this talk, we focus on both issues: (i) open data for decision making, and (ii) decision making for opening data. We will first briefly comment on some research problems regarding the use of open data for decision making. Then, we will outline a novel decision-making approach (based on how open data is actually being used in open-source projects hosted on GitHub) for supporting open data publication. Bio of the speaker: Jose-Norberto Mazón holds a PhD from the University of Alicante (Spain). He is head of the "Cátedra Telefónica" on Big Data and coordinator of the Computing degree at the University of Alicante. He is also a member of the WaKe research group at the University of Alicante. His research focuses on open data management, data integration and business intelligence within "big data" scenarios, and their application to the tourism domain (smart tourism destinations). He has published his research in international journals such as Decision Support Systems, Information Sciences, Data & Knowledge Engineering and ACM Transactions on the Web. Finally, he is involved in the open data project at the University of Alicante, including its open data portal at http://datos.ua.es
Abstract:
Due to the growth of design size and complexity, design verification is an important aspect of the logic circuit development process. The purpose of verification is to validate that the design meets the system requirements and specification. This is done by either functional or formal verification. The most popular approach to functional verification is the use of simulation-based techniques: using models to replicate the behaviour of an actual system is called simulation. In this thesis, a software/data-structure architecture without explicit locks is proposed to accelerate logic gate circuit simulation. We call this system ZSIM. The ZSIM software architecture simulator targets low-cost SIMD multi-core machines. Its performance is evaluated on the Intel Xeon Phi and two other machines (Intel Xeon and AMD Opteron). The aim of these experiments is to:
• Verify that the data structure used allows SIMD acceleration, particularly on machines with gather instructions (section 5.3.1).
• Verify that, on sufficiently large circuits, substantial gains could be made from multicore parallelism (section 5.3.2).
• Show that a simulator using this approach out-performs an existing commercial simulator on a standard workstation (section 5.3.3).
• Show that the performance on a cheap Xeon Phi card is competitive with results reported elsewhere on much more expensive supercomputers (section 5.3.5).
To evaluate ZSIM, two types of test circuits were used:
1. Circuits from the IWLS benchmark suite [1], which allow direct comparison with other published studies of parallel simulators.
2. Circuits generated by a parametrised circuit synthesizer. The synthesizer used an algorithm that has been shown to generate circuits that are statistically representative of real logic circuits, and it allowed testing of a range of very large circuits, larger than the ones for which it was possible to obtain open source files.
The experimental results show that with SIMD acceleration and multicore parallelism, ZSIM achieved a peak parallelisation factor of 300 on the Intel Xeon Phi and 11 on the Intel Xeon. With only SIMD enabled, ZSIM achieved a maximum parallelisation gain of 10 on the Intel Xeon Phi and 4 on the Intel Xeon. Furthermore, it was shown that this software architecture simulator running on a SIMD machine is much faster than, and can handle much bigger circuits than, a widely used commercial simulator (Xilinx) running on a workstation. The performance achieved by ZSIM was also compared with similar pre-existing work on logic simulation targeting GPUs and supercomputers. It was shown that the ZSIM simulator running on a Xeon Phi machine gives simulation performance comparable to the IBM Blue Gene supercomputer at very much lower cost. The experimental results also showed that the Xeon Phi is competitive with simulation on GPUs and allows the handling of much larger circuits than have been reported for GPU simulation. When targeting the Xeon Phi architecture, the automatic cache management of the Xeon Phi handles the on-chip local store without any explicit mention of the local store in the architecture of the simulator itself, whereas targeting GPUs requires explicit cache management in the program, which increases the complexity of the software architecture. Furthermore, one of the strongest points of the ZSIM simulator is its portability: the same code was tested on both AMD and Xeon Phi machines, and the architecture that performs efficiently on the Xeon Phi was ported to a 64-core NUMA AMD Opteron.
To conclude, the two main achievements are restated as follows. The primary achievement of this work was showing that the ZSIM architecture is faster than previously published logic simulators on low-cost platforms. The secondary achievement was the development of a synthetic testing suite that goes beyond the scale range previously publicly available, based on prior work showing that the synthesis technique is valid.
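ZSIM's actual data structures are not reproduced in this abstract, but the core idea of lock-free, word-packed gate evaluation that makes SIMD acceleration possible can be sketched as follows; the netlist, operator table and pattern counts are illustrative assumptions only, not ZSIM's code.

```python
# Illustrative word-packed, lock-free gate evaluation. Each uint64 lane
# carries 64 independent input patterns, so one bitwise op simulates a gate
# 64 times in parallel; numpy vectorizes this across whole signal arrays.
import numpy as np

N_PATTERNS = 1 << 20                  # test patterns, packed 64 per word
WORDS = N_PATTERNS // 64

rng = np.random.default_rng(0)
a = rng.integers(0, 2**63, WORDS, dtype=np.uint64)
b = rng.integers(0, 2**63, WORDS, dtype=np.uint64)

# Netlist in topological order: (output, op, inputs). A flat table like this
# needs no locks because each output word is written by exactly one gate.
signals = {"a": a, "b": b}
netlist = [
    ("n1", "AND", ("a", "b")),
    ("n2", "XOR", ("a", "n1")),
    ("out", "NOT", ("n2",)),
]

ops = {
    "AND": lambda x, y: x & y,
    "XOR": lambda x, y: x ^ y,
    "NOT": lambda x: ~x,
}

for out, op, ins in netlist:
    signals[out] = ops[op](*(signals[i] for i in ins))

print(f"simulated {N_PATTERNS} patterns for {len(netlist)} gates")
```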
Abstract:
This thesis consists of a software engineering project that deals with the development, functioning and implementation of the XEOReports module, which later became a component of the XEO platform. The XEOReports module aims at the construction of dynamic reports in the Portable Document Format (PDF), based on edition screens of the XEO platform. JasperReports, an open source reporting engine that generates reports in several file formats, was also used in the project's development; the XEOReports module is therefore the result of the integration of the two platforms, namely XEO and JasperReports. The study took into account the JasperReports platform's requirements for creating reports based on edition screens of the XEO platform. Moreover, the UML software development methodology, as well as the good software development practices inherent in it, was respected and followed throughout the project.
Abstract:
FEA simulation of thermal metal cutting is central to interactive design and manufacturing. It is therefore relevant to assess the applicability of open-source FEA software to simulate 2D heat transfer in metal sheet laser cuts. Application of open source code (e.g. FreeFem++, FEniCS, MOOSE) makes additional scenarios possible (e.g. parallel, CUDA, etc.) at lower cost. However, a precise assessment is required of the scenarios in which open software can be a sound alternative to a commercial one. This article contributes in this regard by presenting a comparison of the aforementioned free FEM software for the simulation of heat transfer in thin (i.e. 2D) sheets subject to a gliding laser point source. We use the commercial ABAQUS software as the reference against which the open software is compared. A convective linear thin-sheet heat transfer model, with and without material removal, is used. This article does not attempt a full design of computer experiments. Our partial assessment shows that the thin-sheet approximation turns out to be adequate in terms of the relative error for linear alumina sheets. For mesh resolutions finer than 10e−5 m, the open-source and reference software temperatures differ by at most 1% of the temperature prediction. Ongoing work includes adaptive re-meshing, nonlinearities, sheet stress analysis and Mach (also called 'relativistic') effects.
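For readers unfamiliar with the open-source candidates, a minimal sketch of this kind of thin-sheet model in legacy FEniCS is shown below; the mesh dimensions, material constants and the Gaussian stand-in for the gliding laser spot are illustrative assumptions, not the article's validated setup.

```python
# Minimal sketch of a 2D thin-sheet transient heat model in legacy FEniCS
# (dolfin): implicit time stepping, linear convective loss, moving Gaussian
# source as a stand-in for the laser spot. All values are assumptions.
from fenics import (RectangleMesh, Point, FunctionSpace, TrialFunction,
                    TestFunction, Function, Expression, Constant,
                    interpolate, solve, dx, dot, grad)

mesh = RectangleMesh(Point(0, 0), Point(0.05, 0.05), 200, 200)
V = FunctionSpace(mesh, "P", 1)

dt, k, rho_c, h = 1e-3, 30.0, 3.0e6, 20.0  # step, conductivity, rho*cp, convection
T_old = interpolate(Constant(300.0), V)     # ambient start, kelvin

# Gaussian source gliding along x at speed v (illustrative laser model).
q = Expression("Q*exp(-(pow(x[0]-v*t,2)+pow(x[1]-yc,2))/(2*s*s))",
               Q=1e9, v=0.01, t=0.0, yc=0.025, s=1e-4, degree=2)

T, w = TrialFunction(V), TestFunction(V)
a = (rho_c / dt) * T * w * dx + k * dot(grad(T), grad(w)) * dx + h * T * w * dx
L = (rho_c / dt) * T_old * w * dx + q * w * dx + h * Constant(300.0) * w * dx

T_new = Function(V)
for step in range(10):                      # a few implicit time steps
    q.t = step * dt
    solve(a == L, T_new)
    T_old.assign(T_new)
```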
Abstract:
This panel presentation provided several use cases that detail the complexity of large-scale digital library system (DLS) migration from the perspective of three university libraries and a statewide academic library services consortium. Each described the methodologies developed at the beginning of their migration process, the unique challenges that arose along the way, how issues were managed, and the outcomes of their work. Florida Atlantic University, Florida International University, and the University of Central Florida are members of the state's academic library services consortium, the Florida Virtual Campus (FLVC). In 2011, the Digital Services Committee members began exploring alternatives to DigiTool, their shared FLVC hosted DLS. After completing a review of functional requirements and existing systems, the universities and FLVC began the implementation process of their chosen platforms. Migrations began in 2013 with limited sets of materials. As functionalities were enhanced to support additional categories of materials from the legacy system, migration paths were created for the remaining materials. Some of the challenges experienced with the institutional and statewide collaborative legacy collections were due to gradual changes in standards, technology, policies, and personnel. This was manifested in the quality of original digital files and metadata, as well as collection and record structures. Additionally, the complexities involved with multiple institutions collaborating and compromising throughout the migration process, as well as the move from a consortial support structure with a vendor solution to open source systems (both locally and consortially supported), presented their own sets of unique challenges. Following the presentation, the speakers discussed commonalities in their migration experience, including learning opportunities for future migrations.
Abstract:
The evolution of modern, increasingly sensitive image sensors, the increasingly compact design of cameras, and the recent emergence of low-cost cameras have allowed underwater photogrammetry to become a reliable and irreplaceable technique for estimating the structure of the seabed with high accuracy. Within this context, the main topic of this work is underwater photogrammetry from a geomatic point of view and the issues associated with its implementation, in particular with the support of Unmanned Underwater Vehicles. Questions such as how the technique works, what is needed for a proper survey, what tools are available to apply the technique, and how to resolve measurement uncertainties are the subject of this thesis. The study can be divided into two major parts: a practical part devoted to several ad-hoc surveys and tests, and another supported by bibliographic research. The main contributions, however, come from the experimental section, in which two practical case studies are carried out to improve the quality of the underwater survey of some calibration platforms. The results of these two experiments showed that the refractive effects of the water and the underwater housing can be compensated for by the distortion coefficients in the camera model, but if the aim is high accuracy, a ray-tracing model that takes the configuration of the underwater housing into account must also be coupled. The major contributions of this work are: an overview of the practical issues when performing surveys with a UUV prototype, a method to reach reliable accuracy in 3D reconstructions without the use of an underwater local geodetic network, a guide for those addressing underwater photogrammetry topics for the first time, and the use of open-source environments.
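The refraction effect at the heart of this compensation problem can be illustrated with Snell's law alone; the following back-of-the-envelope sketch (not the thesis code) shows how much a flat-port housing steepens ray angles between water and air.

```python
# Why a flat-port underwater housing bends rays: Snell's law at the
# water/air interface (a thin flat glass port cancels out) makes the
# apparent angle in air steeper than the true angle in water. This is the
# effect that distortion coefficients or ray-tracing models must absorb.
import math

N_WATER, N_AIR = 1.333, 1.000

def apparent_angle_in_air(theta_water_deg: float) -> float:
    """Snell's law: n_water * sin(theta_w) = n_air * sin(theta_a)."""
    s = N_WATER / N_AIR * math.sin(math.radians(theta_water_deg))
    return math.degrees(math.asin(s))

for theta in (5, 15, 30):
    print(f"true {theta:2d} deg in water -> {apparent_angle_in_air(theta):5.2f} deg in air")
```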
Abstract:
The Structural Genomics Consortium (SGC) and its clinical, industry and disease-foundation partners are launching open-source preclinical translational medicine studies.
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
Background: Feature selection is a pattern recognition approach to choosing important variables according to some criteria in order to distinguish or explain certain phenomena (i.e., for dimensionality reduction). Many genomic and proteomic applications rely on feature selection to answer questions such as selecting signature genes that are informative about some biological state, e.g., normal tissues and several types of cancer, or inferring a prediction network among elements such as genes, proteins and external stimuli. In these applications, a recurrent problem is the lack of samples to perform an adequate estimate of the joint probabilities between element states. A myriad of feature selection algorithms and criterion functions have been proposed, although it is difficult to point to the best solution for each application. Results: The intent of this work is to provide an open-source multiplatform graphical environment for bioinformatics problems, which supports many feature selection algorithms, criterion functions and graphical visualization tools such as scatterplots, parallel coordinates and graphs. A feature selection approach for growing genetic networks from seed genes (targets or predictors) is also implemented in the system. Conclusion: The proposed feature selection environment allows data analysis using several algorithms, criterion functions and graphical visualization tools. Our experiments have shown the software's effectiveness in two distinct types of biological problems. Moreover, the environment can be used in different pattern recognition applications, although its main focus is bioinformatics tasks.
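A minimal sketch of the kind of wrapper such an environment exposes, greedy forward selection with a mutual-information criterion built on scikit-learn, is shown below; the synthetic dataset and the summed-MI criterion are illustrative assumptions, not the paper's implementation.

```python
# Greedy forward feature selection with mutual information as the criterion.
# The summed per-feature MI is a simple stand-in for the joint criterion
# functions discussed in the paper; the dataset is synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=200, n_features=20, n_informative=4,
                           random_state=0)

selected: list[int] = []
for _ in range(4):                        # pick 4 features greedily
    best, best_score = None, -np.inf
    for j in range(X.shape[1]):
        if j in selected:
            continue
        cols = selected + [j]
        score = mutual_info_classif(X[:, cols], y, random_state=0).sum()
        if score > best_score:
            best, best_score = j, score
    selected.append(best)

print("selected features:", selected)
```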
Abstract:
Thousands of Free and Open Source Software Projects (FSP) were, and continually are, created on the Internet. This scenario increases the number of opportunities to collaborate to the same extent that it promotes competition for users and contributors, who can take projects to levels unachievable by founders alone. Given that the main goal of FSP founders is to improve their projects by means of collaboration, it is important to understand and manage a project's capacity to attract users and contributors. To support researchers and founders in this challenge, this paper introduces the concept of attractiveness and develops a theoretical-managerial toolkit about the causes, indicators and consequences of attractiveness, enabling its strategic management.
Abstract:
This paper reports on a knowledge management practice of the Escola de Governo do Paraná: SabeRES em Gestão Pública, an open-access institutional repository that makes available on the Internet all the technical and scientific production of the Escola de Governo do Paraná and of other learning organizations. It first discusses the definition of content and knowledge, moving on to the legal framework of institutional repositories and the worldwide Open Access movement. It then discusses free software technology for building repositories, its characteristics, and the most widely used open source content management systems. Next, it presents the experience of the Escola de Governo do Paraná across the three phases of SabeRES: the creation, the management and the publication of documents. The methodology consisted of a SWOT analysis, followed by graphic and structural planning, definition of the architecture, construction of the graphical interface with the XOOPS tool (free software) and definition of the initial menu tree. Training the team to manage and maintain the repository was essential for inserting the documents and making the system available on the Internet. Finally, a Technical Board was constituted to review the materials received and to contribute improvements to the management of SabeRES.
Abstract:
The development of three-dimensional digital characters in animation, the constant search for convincing technological solutions, and a distinctive aesthetic have contributed to the success and consolidation of three-dimensional animation in the entertainment industry. However, every work that explores the digital/3D medium becomes a 'victim' of the limitations of rendering applied to a sequence of images, owing to rising financial and human costs, as well as the influence and difficulty this implies for meeting goals and deadlines. Real time has taken on an increasingly predominant role in the interactive animation industry. With the evolution of technology came the need to find an appropriate methodology to leverage the development of real-time 3D animation through open-source or low-budget software, reducing costs while at the same time eliminating any dependence on rendering in 3D animation. Developing characters in real time enables a new approach: interactivity in the art of animating. This opens up a wide range of new applications and consequently increases the interest and curiosity of the spectator. However, the insertion, implementation and (ab)use of technology in animation raises current questions about the role of the animator. This dissertation analyses these aspects in support of the real-time 3D animation project called 'PALCO'.
Abstract:
3D animation of facial expressions is a complex task which, combined with the high resource consumption of the hardware itself, makes the process extremely long. Two further constraints, namely ever lower budgets and the speed demanded by clients, can jeopardize the sustainability of an animation project. It is therefore necessary to join efforts and investigate deeply in order to make 3D animation accessible to any animator. It is important to start precisely with free software, to avoid an up-front expense, and open source, so that any programmer can likewise give free rein to their imagination and any kind of extension or process improvement can be freely added. The current paradigm of free and open source software in 3D modelling and animation is Blender 3D, which was taken as the reference for all technical specifications.
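As a minimal illustration of render-free facial animation in Blender's Python API, the sketch below keyframes a shape key so the expression plays back in real time in the viewport; the object and shape-key names are hypothetical.

```python
# Illustrative Blender (bpy) sketch: drive a facial shape key and keyframe
# it, so the expression animates in the real-time viewport with no offline
# render. "Face" and "Smile" are assumed names, not from the dissertation.
import bpy

face = bpy.data.objects["Face"]                    # hypothetical mesh object
smile = face.data.shape_keys.key_blocks["Smile"]   # hypothetical shape key

for frame, value in ((1, 0.0), (12, 1.0), (24, 0.0)):
    smile.value = value
    smile.keyframe_insert(data_path="value", frame=frame)
```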
Abstract:
This paper presents a catalog of smells in the context of interactive applications. These so-called usability smells are indicators of poor design in an application's user interface, with the potential to hinder not only its usability but also its maintenance and evolution. To eliminate such usability smells we discuss a set of program/usability refactorings. In order to validate the presented usability smell catalog and the associated refactorings, we present a preliminary empirical study with software developers in the context of a real open source hospital management application. Moreover, a tool that computes graphical user interface behavior models, given the application's source code, is used to automatically detect usability smells at the model level.
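As a toy illustration of model-level detection (not the paper's tool), the sketch below treats a GUI behavior model as a window-navigation graph and flags windows reachable only through long action paths; the model and the depth threshold are assumptions.

```python
# Toy model-level smell check: breadth-first search over a window-navigation
# graph flags windows buried behind many user actions, a plausible
# "hard-to-reach functionality" style smell. Graph and threshold are assumed.
from collections import deque

# window -> windows reachable in one user action (hypothetical model)
model = {
    "main": ["patients", "settings"],
    "patients": ["record"],
    "record": ["billing"],
    "billing": ["export"],
    "settings": [],
    "export": [],
}

def depths(start: str) -> dict:
    """Minimum number of user actions needed to reach each window."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        w = queue.popleft()
        for nxt in model.get(w, []):
            if nxt not in seen:
                seen[nxt] = seen[w] + 1
                queue.append(nxt)
    return seen

for window, depth in depths("main").items():
    if depth >= 3:
        print(f"usability smell: '{window}' needs {depth} actions to reach")
```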