915 results for Open source information retrieval
Abstract:
The final graduation project submitted to qualify for the degree of Bachelor of Library and Information Science, titled Old National Bibliographical Books from 1830 to 1900 for the National Library of Costa Rica "Miguel Obregon Lizano," set the following general objectives: to identify the books published between 1830 and 1900, to create a computerized catalog, and to investigate conservation, preservation, and loan policies, in order to facilitate access to, retrieval of, and dissemination of these books via CD-ROM. In line with these objectives, the study identified, selected, separated, and integrated the Old National Bibliographical Books from 1830 to 1900, and established itself as a pioneering effort in the creation of an antique bibliographic collection at the National Library of Costa Rica "Miguel Obregon Lizano": a valuable body of documents that is not always available to students, whether for lack of publicity or because it is not represented in catalogs built with current technology. The research found a shortage of antique collections, and consequently of the concept, organization, and creation of such holdings, which leads the authors to affirm that this is one of the first forays into the subject and therefore a substantial contribution to the National Library, to the field of librarianship, and to the country at large, since it creates a source of access to information for users: researchers, historians, anthropologists, and the community in general. The fundamental purpose of this study is thus to demonstrate the unquestionable usefulness of the Old National Bibliographical Books for the researchers who use the National Library.
Abstract:
Photovoltaic systems are emerging renewable energy sources that generate electricity from solar radiation. Monitoring off-grid photovoltaic systems provides the information their owners need to maintain, operate, and control these systems, reducing operating costs and avoiding unwanted interruptions in the electricity supply of isolated areas. This article proposes the development of a platform for monitoring off-grid photovoltaic systems in Ecuador, with the fundamental objective of developing a scalable solution based on the use of free software, low-power sensors, and web services delivered in the 'Software as a Service' (SaaS) mode for processing, managing, and publishing the recorded information, as well as the creation of an innovative solar photovoltaic control center in Ecuador.
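As a rough illustration of the SaaS-style web service described above, the following minimal sketch accepts sensor readings over HTTP. The Flask endpoint, field names (panel_id, voltage_v, current_a), and in-memory store are illustrative assumptions, not details from the article.

```python
from datetime import datetime, timezone
from flask import Flask, request, jsonify

app = Flask(__name__)
readings = []  # in-memory store; a real deployment would use a database

@app.route("/api/v1/readings", methods=["POST"])
def ingest_reading():
    """Accept one measurement from a low-power sensor node (hypothetical schema)."""
    payload = request.get_json(force=True)
    reading = {
        "panel_id": payload["panel_id"],
        "voltage_v": float(payload["voltage_v"]),
        "current_a": float(payload["current_a"]),
        "received_at": datetime.now(timezone.utc).isoformat(),
    }
    readings.append(reading)
    return jsonify({"stored": len(readings)}), 201

if __name__ == "__main__":
    app.run()
```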
Abstract:
Natural language processing has achieved great success in a wide range of applications, producing both commercial language services and open-source language tools. However, most methods take a static or batch approach, assuming that the model has all the information it needs and makes a one-time prediction. In this dissertation, we study dynamic problems where the input comes in a sequence instead of all at once, and the output must be produced while the input is arriving. In these problems, predictions are often made based only on partial information. We see this dynamic setting in many real-time, interactive applications. These problems usually involve a trade-off between the amount of input received (cost) and the quality of the output prediction (accuracy). Therefore, the evaluation considers both objectives (e.g., plotting a Pareto curve). Our goal is to develop a formal understanding of sequential prediction and decision-making problems in natural language processing and to propose efficient solutions. Toward this end, we present meta-algorithms that take an existing batch model and produce a dynamic model to handle sequential inputs and outputs. We build our framework upon theories of the Markov Decision Process (MDP), which allows learning to trade off competing objectives in a principled way. The main machine learning techniques we use are from imitation learning and reinforcement learning, and we advance current techniques to tackle problems arising in our settings. We evaluate our algorithm on a variety of applications, including dependency parsing, machine translation, and question answering. We show that our approach achieves a better cost-accuracy trade-off than the batch approach and heuristic-based decision-making approaches. We first propose a general framework for cost-sensitive prediction, where different parts of the input come at different costs. We formulate a decision-making process that selects pieces of the input sequentially, and the selection is adaptive to each instance. Our approach is evaluated on both standard classification tasks and a structured prediction task (dependency parsing). We show that it achieves similar prediction quality to methods that use all input, while inducing a much smaller cost. Next, we extend the framework to problems where the input is revealed incrementally in a fixed order. We study two applications: simultaneous machine translation and quiz bowl (incremental text classification). We discuss challenges in this setting and show that adding domain knowledge eases the decision-making problem. A central theme throughout the chapters is an MDP formulation of a challenging problem with sequential input/output and trade-off decisions, accompanied by a learning algorithm that solves the MDP.
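The cost-accuracy trade-off formalized here as an MDP can be illustrated with a toy consume-or-stop loop. The stub model and the fixed confidence-threshold policy below are illustrative assumptions; the dissertation learns its policy via imitation and reinforcement learning rather than a hand-set threshold.

```python
# Toy sketch of the cost-accuracy MDP: at each step the agent either
# CONSUMES the next input token (paying a cost) or STOPS and predicts.

def batch_model(prefix):
    """Stand-in for a pre-trained batch model: returns (label, confidence)."""
    confidence = min(1.0, 0.2 + 0.1 * len(prefix))  # grows with more input
    return ("positive", confidence)

def run_episode(tokens, threshold=0.8, cost_per_token=0.05):
    consumed = []
    for tok in tokens:
        label, conf = batch_model(consumed)
        if conf >= threshold:      # STOP action: commit to a prediction early
            break
        consumed.append(tok)       # CONSUME action: pay for one more token
    label, conf = batch_model(consumed)
    return label, conf, cost_per_token * len(consumed)

label, conf, cost = run_episode("the quick brown fox jumps over".split())
print(label, round(conf, 2), round(cost, 2))  # accuracy proxy vs. cost incurred
```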
Abstract:
Phylogenetic inference consists in searching for an evolutionary tree that best explains the genealogical relationships of a set of species. Phylogenetic analysis has a large number of applications in areas such as biology, ecology, and paleontology. Several criteria have been defined for inferring phylogenies, among them maximum parsimony and maximum likelihood. The first tries to find the phylogenetic tree that minimizes the number of evolutionary steps needed to describe the evolutionary history of the species, while the second tries to find the tree with the highest probability of producing the observed data under an evolutionary model. The search for a phylogenetic tree can be formulated as a multi-objective optimization problem, which aims to find trees that satisfy both the parsimony and the likelihood criterion simultaneously, as far as possible. Because these criteria differ, there will not be a single optimal solution (a single tree) but a set of compromise solutions, called "Pareto optimal" solutions. To find these solutions, evolutionary algorithms are nowadays used with success. These algorithms are a family of inexact techniques inspired by the process of natural selection, and they usually find high-quality solutions to difficult optimization problems. They work by manipulating a set of trial solutions (trees, in the case of phylogeny) with operators, some of which exchange information between solutions, simulating DNA crossover, while others apply random modifications, simulating mutation. The result is an approximation to the Pareto-optimal set, which can be shown in a graph so that the domain expert (the biologist, in the case of phylogenetic inference) can choose the compromise solution of greatest interest. For multi-objective optimization applied to phylogenetic inference there is an open-source software tool, called MO-Phylogenetics, designed to solve inference problems with both classic and state-of-the-art evolutionary algorithms.
REFERENCES
[1] C.A. Coello Coello, G.B. Lamont, D.A. van Veldhuizen. Evolutionary Algorithms for Solving Multi-Objective Problems. Springer, August 2007.
[2] C. Zambrano-Vega, A.J. Nebro, J.F. Aldana-Montes. MO-Phylogenetics: a phylogenetic inference software tool with multi-objective evolutionary metaheuristics. Methods in Ecology and Evolution. In press, February 2016.
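To make the Pareto-optimality notion above concrete, here is a minimal sketch of dominance filtering over the two criteria, with parsimony minimized and log-likelihood maximized. Candidate trees are abstracted to (parsimony, log-likelihood) pairs, and the scores are invented for illustration.

```python
# Bi-objective dominance: lower parsimony is better, higher log-likelihood is better.

def dominates(a, b):
    """True if tree a is at least as good as b on both criteria and better on one."""
    better_or_equal = a[0] <= b[0] and a[1] >= b[1]
    strictly_better = a[0] < b[0] or a[1] > b[1]
    return better_or_equal and strictly_better

def pareto_front(trees):
    """Keep every candidate that no other candidate dominates."""
    return [t for t in trees if not any(dominates(u, t) for u in trees if u is not t)]

candidates = [(120, -5410.2), (118, -5423.9), (125, -5398.7), (118, -5419.0)]
print(pareto_front(candidates))  # the set of compromise (Pareto-optimal) trees
```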
Abstract:
FEA simulation of thermal metal cutting is central to interactive design and manufacturing. It is therefore relevant to assess the applicability of open-source FEA software to simulate 2D heat transfer in metal sheet laser cuts. Open-source code (e.g. FreeFem++, FEniCS, MOOSE) makes additional scenarios possible (e.g. parallel, CUDA, etc.) at lower cost. However, a precise assessment is required of the scenarios in which open software can be a sound alternative to a commercial one. This article contributes in this regard by presenting a comparison of the aforementioned free FEM software for the simulation of heat transfer in thin (i.e. 2D) sheets subject to a gliding laser point source. We use the commercial ABAQUS software as the reference against which the open software is compared. A convective linear thin-sheet heat transfer model, with and without material removal, is used. This article does not attempt a full design of computer experiments. Our partial assessment shows that the thin-sheet approximation is adequate in terms of relative error for linear alumina sheets. For mesh resolutions finer than 10e-5 m, the temperatures predicted by the open and reference software differ by at most 1 % of the temperature prediction. Ongoing work includes adaptive re-meshing, nonlinearities, sheet stress analysis, and Mach (also called 'relativistic') effects.
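The kind of model being compared can be sketched in a few lines of legacy FEniCS (one of the open codes named above). The mesh size, material constants, and laser parameters below are illustrative assumptions, not values from the study; convective loss enters the 2D thin-sheet equation as a Robin-type term.

```python
from fenics import *  # legacy FEniCS (dolfin) API

mesh = RectangleMesh(Point(0.0, 0.0), Point(0.02, 0.02), 80, 80)  # 2 cm x 2 cm sheet
V = FunctionSpace(mesh, 'P', 1)

T_n = interpolate(Constant(300.0), V)  # initial temperature field [K]
T, w = TrialFunction(V), TestFunction(V)

dt = 1e-3                   # time step [s]
rho_c = Constant(3.3e6)     # volumetric heat capacity [J/(m^3 K)], assumed
k = Constant(30.0)          # conductivity [W/(m K)], alumina-like assumption
h = Constant(20.0)          # effective convective coefficient [W/(m^2 K)], assumed
T_inf = Constant(300.0)     # ambient temperature [K]

# Gliding Gaussian laser spot moving along x at speed vx (hypothetical values)
q = Expression('Q*exp(-(pow(x[0]-vx*t, 2) + pow(x[1]-y0, 2))/(2*s*s))',
               degree=2, Q=5e8, vx=0.02, t=0.0, y0=0.01, s=3e-4)

# Backward-Euler weak form of the convective thin-sheet heat equation
a = (rho_c*T*w/dt + k*dot(grad(T), grad(w)) + h*T*w)*dx
L = (rho_c*T_n*w/dt + q*w + h*T_inf*w)*dx

T_sol, t = Function(V), 0.0
while t < 0.05:
    t += dt
    q.t = t                 # advance the laser position
    solve(a == L, T_sol)
    T_n.assign(T_sol)
```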
Abstract:
Master's dissertation, Language Sciences, Faculdade de Ciências Humanas e Sociais, Universidade do Algarve, 2016
Abstract:
We present an advanced method for achieving natural-sounding modifications when applying a pitch-shifting process to the singing voice, by modifying the spectral envelope of the audio excerpt. To this end, an all-pole spectral envelope model was selected to describe the global variations of the spectral envelope as the pitch changes. We applied a pitch-shifting process to some sustained vowels, with and without the envelope processing, and compared the two by means of a survey open to volunteers on our website.
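A common way to obtain such an all-pole envelope (not necessarily the authors' exact procedure) is linear predictive coding. The sketch below fits an LPC model to a synthetic sustained-vowel stand-in and evaluates the envelope; the signal, LPC order, and filter parameters are illustrative choices.

```python
import numpy as np
import scipy.signal
import librosa

# Synthetic stand-in for a sustained vowel: a 220 Hz pulse train through a
# fixed resonant filter (real use would load a recorded vowel instead).
sr = 16000
t = np.arange(sr) / sr
excitation = scipy.signal.square(2 * np.pi * 220 * t)
b, a_form = scipy.signal.butter(2, [600, 1200], btype="band", fs=sr)
y = scipy.signal.lfilter(b, a_form, excitation).astype(np.float32)

# All-pole (LPC) model of the spectral envelope; the order is illustrative.
a = librosa.lpc(y, order=24)

# The envelope is the magnitude response of the all-pole filter 1/A(z).
freqs, H = scipy.signal.freqz([1.0], a, worN=1024, fs=sr)
envelope_db = 20 * np.log10(np.abs(H) + 1e-12)
print(freqs[np.argmax(envelope_db)])  # dominant resonance frequency [Hz]
```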
Abstract:
This panel presentation provided several use cases detailing the complexity of large-scale digital library system (DLS) migration from the perspective of three university libraries and a statewide academic library services consortium. Each described the methodologies developed at the beginning of their migration process, the unique challenges that arose along the way, how issues were managed, and the outcomes of their work. Florida Atlantic University, Florida International University, and the University of Central Florida are members of the state's academic library services consortium, the Florida Virtual Campus (FLVC). In 2011, the Digital Services Committee members began exploring alternatives to DigiTool, their shared, FLVC-hosted DLS. After completing a review of functional requirements and existing systems, the universities and FLVC began implementing their chosen platforms. Migrations began in 2013 with limited sets of materials. As functionality was enhanced to support additional categories of materials from the legacy system, migration paths were created for the remaining materials. Some of the challenges experienced with the institutional and statewide collaborative legacy collections were due to gradual changes in standards, technology, policies, and personnel, manifested in the quality of the original digital files and metadata as well as in collection and record structures. Additionally, the complexities of multiple institutions collaborating and compromising throughout the migration process, together with the move from a consortial support structure with a vendor solution to open-source systems (both locally and consortially supported), presented their own unique challenges. Following the presentation, the speakers discussed commonalities in their migration experiences, including learning opportunities for future migrations.
Abstract:
Thesis (Ph.D., Computing) -- Queen's University, 2016
Abstract:
Information systems and information technology have been a key element in organizations, which seek to align them with business strategy, since this puts companies in a better position to face market challenges (Morantes Leal and Miraidy Elena, 2007). To address this topic, we analyze an information system deployed at the company Belta Ltda. in order to determine the relationship between productivity and the use of enterprise systems. The analysis is organized into six chapters, as follows. The first chapter introduces enterprise information systems and the importance of using these technologies, and describes the objectives of this research, its scope, and the project's connection with the research line of the school of administration at the Universidad del Rosario. The second chapter presents the theoretical framework: a description of the types of information systems and of the methodologies used to evaluate technology use. Chapter three then describes the methodology followed in this research and the tools used for the case study. The fourth chapter describes the company, its organizational chart, and the general business environment, and applies the guiding document, the integral 5D's model, which consists of running different diagnostics to determine the company's internal and external standing. Finally, based on the analysis and the results obtained, the closing chapters draw conclusions and propose recommendations for the company.
Abstract:
Data replication is a mechanism to synchronize and integrate data between databases distributed over a computer network. It is an important tool in several situations: creating backup systems, load balancing between nodes, distributing information across locations, and integrating heterogeneous systems. Replication also reduces network traffic, because data remains available locally, and it keeps that data accessible even in the event of a temporary network failure. This dissertation is based on the development of a generic application for database replication, to be made available as open-source software. The application allows data integration between various systems, with particular focus on the integration of heterogeneous data, data fragmentation, replication in cascade, data format conversion between replicas, master/slave and multi-master synchronization, and adaptability to a variety of situations.
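To make the master/slave mode concrete, here is a minimal one-way synchronization sketch under an assumed versioned schema; it is not the thesis's actual design, which also covers cascade replication, format conversion, and multi-master modes.

```python
import sqlite3

# One-way (master -> slave) replication using a monotonically increasing
# version column; schema and conflict rule are illustrative assumptions.
DDL = ("CREATE TABLE IF NOT EXISTS items "
       "(id INTEGER PRIMARY KEY, payload TEXT, version INTEGER NOT NULL)")

def replicate(master: sqlite3.Connection, slave: sqlite3.Connection) -> int:
    """Copy every master row newer than the slave's highest seen version."""
    slave.execute(DDL)
    last = slave.execute("SELECT COALESCE(MAX(version), 0) FROM items").fetchone()[0]
    rows = master.execute(
        "SELECT id, payload, version FROM items WHERE version > ?", (last,)
    ).fetchall()
    slave.executemany(
        "INSERT OR REPLACE INTO items (id, payload, version) VALUES (?, ?, ?)", rows
    )
    slave.commit()
    return len(rows)

# Demo with two in-memory databases standing in for networked nodes
master, slave = sqlite3.connect(":memory:"), sqlite3.connect(":memory:")
master.execute(DDL)
master.executemany("INSERT INTO items VALUES (?, ?, ?)",
                   [(1, "reading-a", 1), (2, "reading-b", 2)])
master.commit()
print(replicate(master, slave))  # -> 2 rows copied
```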
Abstract:
This paper presents our work at FIRE CHIS 2016. Given a CHIS query and a document associated with that query, the task is to classify the sentences in the document as relevant or not relevant to the query, and to further classify the relevant sentences as supporting, neutral, or opposing the claim made in the query. In this paper, we present two different approaches to this classification. The first approach uses two models: an information retrieval model retrieves the sentences that are relevant to the query, and a supervised classification model then labels each relevant sentence as support, oppose, or neutral. The second approach uses machine learning alone to learn a single model that classifies sentences into four classes (relevant & support, relevant & neutral, relevant & oppose, irrelevant & neutral). Our submission to CHIS uses the first approach.
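As an illustration of the second approach (a single multi-class model over the four joint labels), here is a minimal scikit-learn sketch; the query-sentence concatenation, the pipeline, and the toy examples are assumptions, not the authors' actual system or the FIRE CHIS data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Tiny invented training set: (query, sentence, joint label)
train = [
    ("does vitamin C prevent colds", "Vitamin C reduced cold duration in trials.", "relevant_support"),
    ("does vitamin C prevent colds", "Studies found no preventive effect of vitamin C.", "relevant_oppose"),
    ("does vitamin C prevent colds", "Vitamin C is an essential nutrient.", "relevant_neutral"),
    ("does vitamin C prevent colds", "The stock market rose on Monday.", "irrelevant_neutral"),
]
# Concatenate query and sentence so the model sees both (illustrative featurization)
texts = [q + " || " + s for q, s, _ in train]
labels = [y for _, _, y in train]

model = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("svm", LinearSVC()),
])
model.fit(texts, labels)
print(model.predict(["does vitamin C prevent colds || Trials show vitamin C shortens colds."]))
```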
Abstract:
This article describes the implementation of a geographic information tool, developed on OpenSource platforms, for the management and planning of water resources in Catalonia. The tool is designed to respond to extreme events such as drought by delivering fast and intuitive evaluation and decision-making criteria. This Geographic Information System (GIS) for water resource management was developed to deliver results tailored to the client: its agile and simple interface, multiuser capability, high performance and scalability, and absence of license costs allow a limited investment to be amortized very quickly. Noteworthy is the embedded automation of client-defined systematic processes, geoprocesses, and multi-criteria analyses, which saves significant time and resources and increases productivity.
Key words: Geographic Information System (GIS), Open Source, water resources management, automation
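A geoprocess of the automated kind mentioned above can be sketched with open-source Python GIS tooling; the layers, field names, and drought scenario below are invented for illustration and are not part of the described system.

```python
import geopandas as gpd
from shapely.geometry import Point, box

# Hypothetical layers: water intakes and a drought-affected zone. In a real
# system these would come from the client's spatial database, not inline data.
intakes = gpd.GeoDataFrame(
    {"name": ["intake_A", "intake_B"]},
    geometry=[Point(1.0, 1.0), Point(5.0, 5.0)],
    crs="EPSG:25831",  # UTM zone 31N, covering Catalonia
)
drought_zone = gpd.GeoDataFrame(
    {"zone": ["NE"]}, geometry=[box(0.0, 0.0, 2.0, 2.0)], crs="EPSG:25831"
)

# Automated geoprocess: flag every intake that falls inside the affected zone
affected = gpd.sjoin(intakes, drought_zone, predicate="within")
print(affected[["name", "zone"]])
```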