915 results for Open source information retrieval
Abstract:
Integrated Management Systems, or Enterprise Resource Planning (ERP) systems, make it possible to process all the information a company needs using a single database. Much has been written about this kind of software, addressing issues such as the high cost of license acquisition and the dependence on consulting services for its adaptation and deployment in companies. The development and use of Free/Open Source ERP (FOS-ERP) has been growing recently. However, this type of system is still insufficiently explored, even in academia. This article reviews some published work on the subject and raises questions that researchers and other interested parties should address in order to adapt these systems and make their use viable in the national context. Thus, after an introduction to the topic, some differences between FOS-ERP and its proprietary counterparts (Proprietary ERP, or P-ERP) are presented in terms of business models, selection, customization, and evolution. Next, some challenges and opportunities that FOS-ERP can offer to users, vendors, researchers, and individual contributors are listed. In conclusion, this article seeks to broaden the discussion on FOS-ERP, highlighting factors such as its potential for technological innovation and business strategies.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Software Repository Mining (MSR) is a research area that analyzes software repositories in order to derive information relevant to the research and practice of software engineering. The main goal of repository mining is to extract static information from repositories (e.g. a code repository or change request system) and turn it into valuable information that supports decision making in software projects. Another research area, Process Mining (PM), aims to uncover the characteristics of the underlying processes of business organizations, supporting process improvement and documentation. Recent work has carried out several analyses using MSR and PM techniques: (i) to investigate the evolution of software projects; (ii) to understand the real underlying process of a project; and (iii) to create defect prediction models. However, few research works have focused on analyzing the contributions of software developers by means of MSR and PM techniques. In this context, this dissertation proposes two empirical studies assessing the contribution of software developers to an open-source and a commercial project using those techniques. The contributions of developers are assessed through three different perspectives: (i) buggy commits; (ii) the size of commits; and (iii) the most important bugs. For the open-source project, 12,827 commits and 8,410 bugs were analyzed, while 4,663 commits and 1,898 bugs were analyzed for the commercial project. Our results indicate that, for the open-source project, the developers classified as core developers contributed more buggy commits (although they also contributed the majority of commits), more code to the project (commit size), and solved more of the important bugs, while the results could not indicate statistically significant differences between developer groups for the commercial project.
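The buggy-commit perspective described in this abstract can be illustrated with a minimal sketch. The function, data, and developer names below are hypothetical, not taken from the dissertation; it only shows the idea of linking mined commits to bugs and comparing rates per developer.

```python
from collections import Counter

def buggy_commit_rates(commits):
    """Compute each developer's buggy-commit rate from mined commit records.

    Each record is a (developer, is_buggy) pair, e.g. produced by linking
    commits to bug reports during repository mining.
    """
    totals, buggy = Counter(), Counter()
    for dev, is_buggy in commits:
        totals[dev] += 1
        if is_buggy:
            buggy[dev] += 1
    return {dev: buggy[dev] / totals[dev] for dev in totals}

# Hypothetical mined data: (developer, commit linked to a bug report?)
commits = [("ana", True), ("ana", False), ("ana", True),
           ("bob", False), ("bob", True)]
rates = buggy_commit_rates(commits)  # per-developer buggy-commit proportion
```

Grouping the resulting rates by core versus peripheral developers and testing them for statistically significant differences would correspond to the comparison the study performs.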
Abstract:
Whereas genome sequencing defines the genetic potential of an organism, transcript sequencing defines the utilization of this potential and links the genome with most areas of biology. To exploit the information within the human genome in the fight against cancer, we have deposited some two million expressed sequence tags (ESTs) from human tumors and their corresponding normal tissues in the public databases. The data currently define approximately 23,500 genes, of which only approximately 1,250 are still represented only by ESTs. Examination of the EST coverage of known cancer-related (CR) genes reveals that <1% do not have corresponding ESTs, indicating that the representation of genes associated with commonly studied tumors is high. The careful recording of the origin of all ESTs we have produced has enabled detailed definition of where the genes they represent are expressed in the human body. More than 100,000 ESTs are available for seven tissues, indicating a surprising variability of gene usage that has led to the discovery of a significant number of genes with restricted expression, and that may thus be therapeutically useful. The ESTs also reveal novel nonsynonymous germline variants (although the one-pass nature of the data necessitates careful validation) and many alternatively spliced transcripts. Although widely exploited by the scientific community, vindicating our totally open source policy, the EST data generated still provide extensive information that remains to be systematically explored, and that may further facilitate progress toward both the understanding and treatment of human cancers.
Abstract:
Tensor3D is a geometric modeling program with the capacity to simulate and visualize in real time the deformation, specified through a tensor matrix, applied to triangulated models representing geological bodies. 3D visualization allows the study of deformational processes that are traditionally conducted in 2D, such as simple and pure shears. Besides geometric objects that are immediately available in the program window, the program can read other models from disk, thus being able to import objects created with different open-source or proprietary programs. A strain ellipsoid and a bounding box are simultaneously shown and instantly deformed with the main object. The principal axes of strain are visualized as well to provide graphical information about the orientation of the tensor's normal components. The deformed models can also be saved, retrieved later and deformed again, in order to study different steps of progressive strain, or to make this data available to other programs. The shape of stress ellipsoids and the corresponding Mohr circles defined by any stress tensor can also be represented. The application was written using the Visualization ToolKit, a powerful scientific visualization library in the public domain. This development choice, allied to the use of the Tcl/Tk programming language, which is independent of the host computational platform, makes the program a useful tool for the study of geometric deformations directly in three dimensions in teaching as well as research activities. (C) 2007 Elsevier Ltd. All rights reserved.
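The core operation the abstract describes, applying a deformation tensor to every vertex of a triangulated model, can be sketched as follows. This is not Tensor3D's code; it is a minimal illustration using a simple-shear tensor as the example deformation.

```python
def deform(vertices, tensor):
    """Apply a 3x3 deformation tensor to each vertex (x, y, z)."""
    return [tuple(sum(tensor[i][j] * v[j] for j in range(3)) for i in range(3))
            for v in vertices]

# Simple shear in the x-direction: gamma controls the shear strain.
gamma = 0.5
simple_shear = [[1.0, gamma, 0.0],
                [0.0, 1.0,   0.0],
                [0.0, 0.0,   1.0]]

# Vertices of one triangle of a (toy) triangulated model.
verts = [(0.0, 0.0, 0.0), (0.0, 1.0, 0.0), (1.0, 1.0, 0.0)]
sheared = deform(verts, simple_shear)  # each vertex displaced along x by gamma*y
```

Applying the same tensor to the vertices of a unit sphere would yield the strain ellipsoid that the program displays alongside the deformed body.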
Abstract:
In some applications of case-based systems, the attributes available for indexing are better described as linguistic variables than given numerical treatment. In these applications, the concept of the fuzzy hypercube can be applied to give a geometrical interpretation of the similarities among cases. This paper presents an approach that uses geometrical properties of the fuzzy hypercube space to carry out the indexing and retrieval of cases.
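The geometric view of similarity can be sketched as follows: each case is a point of the unit hypercube [0,1]^n whose coordinates are membership degrees of its linguistic attributes, and closeness in that space measures similarity. The particular measure below (one minus the normalized Manhattan distance) is an illustrative assumption, not necessarily the one used in the paper.

```python
def fuzzy_similarity(a, b):
    """Similarity between two cases viewed as points of the fuzzy hypercube.

    Each coordinate is the membership degree of one linguistic attribute;
    similarity here is 1 minus the normalized Manhattan distance.
    """
    assert len(a) == len(b)
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Hypothetical cases with memberships of, e.g., "high", "low", "medium".
case_in_base = (0.8, 0.2, 0.5)
new_case     = (0.6, 0.2, 0.7)
sim = fuzzy_similarity(case_in_base, new_case)
```

Retrieval then amounts to returning the stored cases whose hypercube points lie nearest to the point of the query case.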
Abstract:
Modeling ERP software means capturing the information necessary for supporting enterprise management. This modeling process goes down through different abstraction layers, from enterprise modeling to code generation. Thus ERP is the kind of system where enterprise engineering undoubtedly has, or should have, a strong influence. For the case of Free/Open Source ERP, the lack of proper modeling methods and tools can jeopardize the advantage brought by source code availability. Therefore, the aim of this paper is to present a development process proposal for the Open Source ERP5 system. The proposed development process aims to cover different abstraction levels, taking into account well established standards and common practices, as well as platform issues. Its main goal is to provide an adaptable meta-process to ERP5 adopters. © 2006 IEEE.
Abstract:
The design and implementation of an ERP system involves capturing the information necessary for implementing the system's structure and behavior that support enterprise management. This process should start at the enterprise modeling level and finish at the coding level, going down through different abstraction layers. For the case of Free/Open Source ERP, the lack of proper modeling methods and tools jeopardizes the advantages of source code availability. Moreover, the distributed, decentralized decision-making, and source-code-driven development culture of open source communities generally doesn't rely on methods for modeling the higher abstraction levels necessary for an ERP solution. The aim of this paper is to present a model-driven development process for the open source ERP system ERP5. The proposed process covers the different abstraction levels involved, taking into account well-established standards and common practices, as well as new approaches, by supplying Enterprise, Requirements, Analysis, Design, and Implementation workflows. Copyright 2008 ACM.
Abstract:
Computer networks allow information to flow through the resources of various devices. This work describes the implementation, through encapsulation, of the DNP3 protocol, commonly employed in Smart Grid communication, in a discrete event simulator. NS-2 is an open-source network event simulator that facilitates the development of communication network scenarios considering the protocols involved, over wireless or wired technologies. The objective of this work is to implement the encapsulation of the DNP3 protocol over TCP/IP in the discrete event simulator NS-2, allowing an analysis of the behavior of a medium- or large-sized network in Smart Grid applications. © 2013 IEEE.
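The encapsulation idea, a DNP3 link-layer frame carried as the payload of a TCP segment, can be sketched as below. This is not the NS-2 implementation; it is a simplified illustration, and the CRC octets that real DNP3 inserts after the header and every 16 data octets are deliberately omitted.

```python
import struct

def dnp3_frame(dest, src, control=0xC4, payload=b""):
    """Build a simplified DNP3 link-layer frame for encapsulation in TCP.

    Real DNP3 appends CRCs to the header and each 16-octet data block;
    that step is omitted here, so this illustrates the framing idea only.
    """
    start = b"\x05\x64"                 # DNP3 start octets
    length = 5 + len(payload)           # control + dest(2) + src(2) + user data
    header = struct.pack("<BBHH", length, control, dest, src)
    return start + header + payload

frame = dnp3_frame(dest=1, src=10, payload=b"\x01\x02")
# In the simulation, 'frame' would be handed to a TCP agent as application
# payload, which is conceptually what the encapsulation does.
```

The little-endian packing mirrors DNP3's byte ordering; in NS-2 the equivalent logic would live in a C++ agent rather than Python.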
Abstract:
Graduate Program in Digital Television: Information and Knowledge - FAAC
Abstract:
Graduate Program in Agronomy (Energy in Agriculture) - FCA
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Information retrieval has been much discussed within Information Science lately. The search for quality information compatible with users' needs has become the object of constant research. Using the Internet as a source of knowledge dissemination has suggested new models of information storage, such as digital repositories, which have been used in academic research as the main form of self-archiving and disseminating information, but with an information structure that calls for better descriptions of resources and hence better retrieval. The objective is thus to improve the information retrieval process by presenting a proposal for a structural model in the context of the semantic web, addressing the use of web 2.0 and web 3.0 in digital repositories and enabling semantic retrieval of information through the construction of a data layer called Iterative Representation. The present study is descriptive and analytical, based on document analysis, and is divided into two parts: the first, characterized by direct, non-participatory observation of tools that implement digital repositories, as well as of repositories already instantiated; and the second, exploratory in character, which proposes an innovative model for repositories using knowledge representation structures and user participation in building a domain vocabulary. The proposed model, Iterative Representation, makes it possible to tailor digital repositories using folksonomy together with a controlled vocabulary of the domain in order to generate an iterative data layer, which allows information feedback and semantic retrieval of information through the structural model designed for repositories. The proposed model resulted in the formulation of the thesis that, through Iterative Representation, it is possible to establish a process of semantic information retrieval in digital repositories.
Abstract:
The indexing process aims to represent synthetically the informational content of documents through a set of terms whose meanings indicate the themes or subjects they treat. With the emergence of the Web, research in automatic indexing received a major boost from the need to retrieve documents from this huge collection. Traditional indexing languages, used to translate the thematic content of documents into standardized terms, have always proved efficient in manual indexing. Ontologies open new perspectives for research in automatic indexing by offering a computer-processable language restricted to a particular domain. The use of ontologies in the automatic indexing process makes it possible to use a domain-specific language and a logical and conceptual framework for making inferences, whose relations allow an expansion of the terms extracted directly from the text of the document. This paper presents techniques for the construction and use of ontologies in the automatic indexing process. We conclude that the use of ontologies in the indexing process not only adds new features to indexing itself, but also allows us to conceive new and advanced features for an information retrieval system.
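The term-expansion step this abstract mentions can be sketched as follows. The toy ontology, its relations, and the term names are all illustrative assumptions; a real system would draw them from a domain ontology rather than a hard-coded dictionary.

```python
# Toy domain ontology: each term maps to directly related concepts
# (e.g. broader terms); the entries are purely illustrative.
ONTOLOGY = {
    "neural network": {"machine learning"},
    "machine learning": {"artificial intelligence"},
    "indexing": {"information retrieval"},
}

def expand_terms(extracted):
    """Expand terms extracted from a document with ontology relations.

    Relations are followed transitively, so 'neural network' also brings
    in 'artificial intelligence' via 'machine learning'.
    """
    result = set(extracted)
    frontier = list(extracted)
    while frontier:
        term = frontier.pop()
        for related in ONTOLOGY.get(term, ()):
            if related not in result:
                result.add(related)
                frontier.append(related)
    return result

index_terms = expand_terms({"neural network", "indexing"})
```

The expanded set is what would be stored in the index, letting a query on a broader concept retrieve documents that only mention its narrower terms.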