948 results for Open Source (OS)
Abstract:
This master's thesis was written with the aim of exploring an inequality: an inequality in the practices surrounding the capture and exploitation of user data in the sphere of Web technologies and services, and more specifically in the sphere of GIS (Geographic Information Systems). In 2014, many companies exploit their users' data in order to improve their services or to generate advertising revenue. On the public and governmental side, this shift has not taken place; federal and municipal governments therefore lack the data that would allow them to improve public infrastructure and services. Cities around the world are trying to improve their services and become "smart," but lack the resources and know-how to ensure a transition that respects the privacy and wishes of their residents. How can a city create geo-referenced datasets without infringing on the rights of its residents? To answer these questions, we carried out a comparative study of the use of OpenStreetMap (OSM) and Google Maps (GM). Through a series of interviews with GM and OSM users, we were able to understand the meanings and use values of these two platforms. An analysis drawing on the concepts of appropriation, collective action, and various critical perspectives allowed us to analyze our interview data in order to understand the stakes and problems behind the use of geolocation technologies, as well as those related to users' contributions to these GIS. Following this analysis, our understanding of the contribution to and use of these services was recontextualized to explore the potential means by which cities could use geolocation technologies to improve their public infrastructure while respecting their citizens.
Abstract:
Code review is an essential process regardless of a project's maturity; it aims to evaluate the contribution made by the code that developers submit. In principle, code review improves the quality of code changes (patches) before they are committed to the project's master repository. In practice, carrying out this process does not rule out the possibility that some bugs go unnoticed. In this document, we present an empirical study investigating code review in a large open source project. We examine the relationships between reviewers' inspections and the personal and temporal factors that could affect the quality of such inspections. First, we report a quantitative study in which we use the SZZ algorithm to detect bug-inducing changes, which we linked with the code review information extracted from the issue tracking system. We found that the reasons why reviewers miss certain bugs were correlated both with their personal characteristics and with the technical properties of the patches under review. Then, we report a qualitative study inviting Mozilla developers to give us their opinion on the attributes that favor a well-formulated code review. The results of our survey suggest that developers consider technical aspects (patch size, number of chunks and modules) as much as personal characteristics (experience and review queue length) to be factors strongly influencing the quality of code reviews.
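The core idea behind the SZZ algorithm mentioned above can be sketched in a few lines: for each line that a bug-fixing change modifies, blame the most recent earlier change that touched that line. The toy history and function name below are illustrative stand-ins for a real version-control log, not the study's actual implementation.

```python
# Minimal sketch of the SZZ idea: the changes that last touched the lines a
# bug fix modifies are the bug-inducing candidates.
# line -> ordered list of change ids that touched it (oldest first)
history = {
    "parser.c:42": ["c1", "c5"],
    "parser.c:43": ["c2", "c5"],
    "util.c:10":  ["c3"],
}

def bug_inducing_candidates(fix_id, history):
    """Return the changes that last touched a line before fix_id changed it."""
    candidates = set()
    for line, ids in history.items():
        if fix_id in ids:
            i = ids.index(fix_id)
            if i > 0:                       # some earlier change touched this line
                candidates.add(ids[i - 1])  # blame the most recent one
    return candidates

print(sorted(bug_inducing_candidates("c5", history)))  # ['c1', 'c2']
```

In the study, each candidate found this way can then be joined with the review metadata for the fix it precedes.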
Abstract:
The open access movement and the open source software movement play an important role in knowledge creation, knowledge management, and knowledge dissemination. Scholarly communication and publishing are increasingly taking place in the electronic environment. With a growing proportion of the scholarly record now existing only in digital format, serious issues regarding access and preservation are being raised that are central to future scholarship. Institutional Repositories provide access to past, present, and future scholarly literature and research documentation; ensure its preservation; assist users in discovery and use; and offer educational programs to enable users to develop lifelong literacy. This paper explores how the IR of Cochin University of Science & Technology supports the scientific community in knowledge creation, knowledge management, and knowledge dissemination.
Abstract:
In this paper, we describe an interdisciplinary project in which visualization techniques were developed for and applied to scholarly work from literary studies. The aim was to bring Christof Schöch's electronic edition of Bérardier de Bataut's Essai sur le récit (1776) to the web. This edition is based on the Text Encoding Initiative's XML-based encoding scheme (TEI P5, subset TEI-Lite). This now de facto standard applies to machine-readable texts used chiefly in the humanities and social sciences. The intention of this edition is to make the edited text freely available on the web, to allow for alternative text views (here original and modern/corrected text), to ensure reader-friendly annotation and navigation, and to permit on-line collaboration in encoding and annotation as well as user comments, all in an open source, generically usable, lightweight package. These aims were attained by relying on the GPL-licensed open source CMS Drupal and combining it with XSLT stylesheets and JavaScript.
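The alternative text views mentioned above (original vs. modern/corrected) are typically encoded in TEI with a <choice> element pairing an original reading (<orig>) with its regularized form (<reg>). Purely as an illustration of how such views can be derived from TEI-Lite markup, here is a minimal sketch; the tiny fragment and helper function are invented for this example and are not the edition's actual code.

```python
import xml.etree.ElementTree as ET

# Toy TEI-Lite fragment: <choice> pairs an original reading (<orig>)
# with its modernized/corrected form (<reg>).
TEI = "{http://www.tei-c.org/ns/1.0}"
doc = ET.fromstring(
    '<p xmlns="http://www.tei-c.org/ns/1.0">Le '
    '<choice><orig>recit</orig><reg>récit</reg></choice> commence.</p>'
)

def render(elem, view):
    """Flatten a TEI element into plain text, picking one reading per <choice>."""
    parts = [elem.text or ""]
    for child in elem:
        if child.tag == TEI + "choice":
            pick = child.find(TEI + ("orig" if view == "original" else "reg"))
            parts.append(pick.text or "")
        else:
            parts.append(render(child, view))
        parts.append(child.tail or "")
    return "".join(parts)

print(render(doc, "original"))  # Le recit commence.
print(render(doc, "modern"))    # Le récit commence.
```

In the project itself, the equivalent selection is performed by XSLT stylesheets inside Drupal rather than by Python.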
Abstract:
Fujaba is an Open Source UML CASE tool project started at the software engineering group of Paderborn University in 1997. In 2002 Fujaba was redesigned and became the Fujaba Tool Suite with a plug-in architecture allowing developers to add functionality easily while retaining full control over their contributions. Multiple Application Domains: Fujaba followed the model-driven development philosophy right from its beginning in 1997. In the early days, Fujaba had a special focus on code generation from UML diagrams, resulting in a visual programming language with a special emphasis on object-structure-manipulating rules. Today, at least six rather independent tool versions are under development in Paderborn, Kassel, and Darmstadt for supporting (1) reengineering, (2) embedded real-time systems, (3) education, (4) specification of distributed control systems, (5) integration with the ECLIPSE platform, and (6) MOF-based integration of system (re-)engineering tools. International Community: To our knowledge, quite a number of research groups have also chosen Fujaba as a platform for UML and MDA related research activities. In addition, many Fujaba users send requests for more functionality and extensions. Therefore, the 8th International Fujaba Days aimed at bringing together Fujaba developers and Fujaba users from all over the world to present their ideas and projects and to discuss them with each other and with the Fujaba core development team.
Abstract:
Accurate data on natural conditions and agricultural systems, at a good spatial resolution, are a key factor in tackling food insecurity in developing countries. A broad variety of approaches exists to obtain precise data and information about agriculture. One system, developed specifically for smallholder agriculture in East Africa, is the Farm Management Handbook of Kenya. It was first published in 1982/83 and fully revised in 2012, and now comprises 7 volumes. The handbooks contain detailed information on climate, soils, suitable crops, and soil care based on the scientific research results of the last 30 years. This density of facts makes extracting all the necessary information time-consuming. In this study we analyse the user needs and the necessary components of a decision support system for smallholder farming in Kenya based on a geographical information system (GIS). Required data sources were identified, as well as essential functions of the system. We analysed the results of our survey conducted in 2012 and early 2013 among agricultural officers. The central objectives are the monitoring of user needs and the problem of the non-adaptability of an agricultural information system at the level of extension officers in Kenya. The outcomes of the survey suggest building a decision support tool on already available open source GIS components. The system should include functionality to show general information for a specific location and should provide precise recommendations about suitable crops and management options to support agricultural guidance at the farm level.
Abstract:
This freely available lecture script accompanies a course of the same name taught by Prof. Dr. Lutz Wegner until the winter semester 1998/99 at the then Department 17 Mathematics/Computer Science of the University of Kassel. Its subject is the introduction to programming that stands at the beginning of almost all degree programs in computer science, mathematics, and related engineering disciplines. Here the introduction uses the programming language Pascal, which Niklaus Wirth (formerly ETH Zürich) developed as early as 1968. It is considered the last representative of the purely procedural languages and generally leads to cleanly structured programs. Via the Turbo Pascal variant, then widespread on PCs, the script also covers object orientation, which is characteristic of today's programming paradigm with Java. Old (and newly written) Pascal programs can be compiled without problems with the Free Pascal open source compilers (www.freepascal.org) and run on all common operating systems. Anyone who needs a technically precise yet comparatively readable introduction, with notes on good and bad programming style, will find it here; the keyword index at the end also gives quick access to individual topics such as parameter passing or working with pointers.
Abstract:
The goal of the work reported here is to capture the commonsense knowledge of non-expert human contributors. Achieving this goal will enable more intelligent human-computer interfaces and pave the way for computers to reason about our world. In the domain of natural language processing, it will provide the world knowledge much needed for semantic processing of natural language. To acquire knowledge from contributors not trained in knowledge engineering, I take the following four steps: (i) develop a knowledge representation (KR) model for simple assertions in natural language, (ii) introduce cumulative analogy, a class of nearest-neighbor based analogical reasoning algorithms over this representation, (iii) argue that cumulative analogy is well suited for knowledge acquisition (KA) based on a theoretical analysis of effectiveness of KA with this approach, and (iv) test the KR model and the effectiveness of the cumulative analogy algorithms empirically. To investigate the effectiveness of cumulative analogy for KA empirically, Learner, an open source system for KA by cumulative analogy, has been implemented, deployed, and evaluated. (The site "1001 Questions" is available at http://teach-computers.org/learner.html.) Learner acquires assertion-level knowledge by constructing shallow semantic analogies between a KA topic and its nearest neighbors and posing these analogies as natural language questions to human contributors. Suppose, for example, that based on the knowledge about "newspapers" already present in the knowledge base, Learner judges "newspaper" to be similar to "book" and "magazine." Further suppose that the assertions "books contain information" and "magazines contain information" are also already in the knowledge base. Then Learner will use cumulative analogy from the similar topics to ask humans whether "newspapers contain information."
Because similarity between topics is computed based on what is already known about them, Learner exhibits bootstrapping behavior: the quality of its questions improves as it gathers more knowledge. By summing evidence for and against posing any given question, Learner also exhibits noise tolerance, limiting the effect of incorrect similarities. The KA power of shallow semantic analogy from nearest neighbors is one of the main findings of this thesis. I perform an analysis of commonsense knowledge collected by another research effort that did not rely on analogical reasoning and demonstrate that there is indeed a sufficient amount of correlation in the knowledge base to motivate using cumulative analogy from nearest neighbors as a KA method. Empirically, the percentages of questions answered affirmatively, answered negatively, and judged nonsensical in the cumulative-analogy case compare favorably with the no-similarity baseline, which relies on random objects rather than nearest neighbors. Of the questions generated by cumulative analogy, contributors answered 45% affirmatively, 28% negatively, and marked 13% as nonsensical; in the control, no-similarity case, 8% of questions were answered affirmatively, 60% negatively, and 26% were marked as nonsensical.
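The newspaper/book/magazine example above can be sketched directly: topics similar by shared known assertions "vote" for properties the target topic might also have, and the summed votes rank the candidate questions. The toy knowledge base and function names below are illustrative, not Learner's actual code.

```python
# Minimal sketch of cumulative analogy over assertion-level knowledge.
kb = {
    "book":      {"contains information", "has pages"},
    "magazine":  {"contains information", "has pages"},
    "newspaper": {"has pages"},
}

def similarity(a, b, kb):
    """Overlap of known assertions (toy stand-in for nearest-neighbor scoring)."""
    return len(kb[a] & kb[b])

def candidate_questions(topic, kb, k=2):
    """Rank assertions to ask about, by summed evidence from the k nearest topics."""
    neighbors = sorted((t for t in kb if t != topic),
                       key=lambda t: similarity(topic, t, kb), reverse=True)[:k]
    votes = {}
    for n in neighbors:                      # sum evidence across neighbors
        for assertion in kb[n] - kb[topic]:  # only assertions not yet known
            votes[assertion] = votes.get(assertion, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)

print(candidate_questions("newspaper", kb))
# top candidate -> ask a contributor: "Do newspapers contain information?"
```

Summing votes across neighbors is what gives the noise tolerance described above: a single spurious neighbor rarely outvotes the rest.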
Abstract:
Memory errors are a common cause of incorrect software execution and security vulnerabilities. We have developed two new techniques that help software continue to execute successfully through memory errors: failure-oblivious computing and boundless memory blocks. The foundation of both techniques is a compiler that generates code that checks accesses via pointers to detect out of bounds accesses. Instead of terminating or throwing an exception, the generated code takes another action that keeps the program executing without memory corruption. Failure-oblivious code simply discards invalid writes and manufactures values to return for invalid reads, enabling the program to continue its normal execution path. Code that implements boundless memory blocks stores invalid writes away in a hash table to return as the values for corresponding out of bounds reads. The net effect is to (conceptually) give each allocated memory block unbounded size and to eliminate out of bounds accesses as a programming error. We have implemented both techniques and applied them to several widely used open source servers (Apache, Sendmail, Pine, Mutt, and Midnight Commander). With standard compilers, all of these servers are vulnerable to buffer overflow attacks as documented at security tracking web sites. Both failure-oblivious computing and boundless memory blocks eliminate these security vulnerabilities (as well as other memory errors). Our results show that our compiler enables the servers to execute successfully through buffer overflow attacks and continue to correctly service user requests without security vulnerabilities.
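The techniques themselves are implemented as a C compiler transformation; purely as a language-neutral illustration of the boundless-memory-block idea, the behavior of one checked block can be modeled as follows (the class and its interface are invented for this sketch):

```python
class BoundlessBlock:
    """Toy model of a boundless memory block: a fixed-size buffer whose
    out-of-bounds accesses are redirected to a hash table instead of
    corrupting adjacent memory or aborting the program."""

    def __init__(self, size):
        self.size = size
        self.data = [0] * size
        self.overflow = {}                 # hash table for out-of-bounds slots

    def write(self, index, value):
        if 0 <= index < self.size:
            self.data[index] = value
        else:
            self.overflow[index] = value   # store the invalid write away

    def read(self, index):
        if 0 <= index < self.size:
            return self.data[index]
        # return the value of the matching out-of-bounds write,
        # or a manufactured default if none exists
        return self.overflow.get(index, 0)

buf = BoundlessBlock(4)
buf.write(2, 7)     # in bounds: normal store
buf.write(9, 42)    # out of bounds: goes to the hash table, no corruption
print(buf.read(2), buf.read(9), buf.read(100))  # 7 42 0
```

A buffer overflow attack that would smash an adjacent return address under a standard compiler instead lands harmlessly in the hash table, which is why the servers keep servicing requests.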
Abstract:
Compositional data naturally arise from the scientific analysis of the chemical composition of archaeological material such as ceramic and glass artefacts. Data of this type can be explored using a variety of techniques, from standard multivariate methods such as principal components analysis and cluster analysis to methods based upon the use of log-ratios. The general aim is to identify groups of chemically similar artefacts that could potentially be used to answer questions of provenance. This paper will demonstrate work in progress on the development of a documented library of methods, implemented using the statistical package R, for the analysis of compositional data. R is an open source package that makes very powerful statistical facilities available at no cost. We aim to show how, with the aid of statistical software such as R, traditional exploratory multivariate analysis can easily be used alongside, or in combination with, specialist techniques of compositional data analysis. The library has been developed from a core of basic R functionality, together with purpose-written routines arising from our own research (for example that reported at CoDaWork'03). In addition, we have included other appropriate publicly available techniques and libraries that have been implemented in R by other authors. Available functions range from standard multivariate techniques through to various approaches to log-ratio analysis and zero replacement. We also discuss and demonstrate a small selection of relatively new techniques that have hitherto been little used in archaeometric applications involving compositional data. The application of the library to the analysis of data arising in archaeometry will be demonstrated; results from different analyses will be compared; and the utility of the various methods discussed.
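As an illustration of the log-ratio family of methods the library covers, the centered log-ratio (clr) transform can be sketched as follows (shown in Python rather than R purely for illustration; the sherd composition is invented):

```python
import math

def clr(composition):
    """Centered log-ratio transform: log of each part minus the mean log,
    mapping a composition (strictly positive parts) into unconstrained
    real space where standard multivariate methods apply."""
    logs = [math.log(x) for x in composition]
    mean = sum(logs) / len(logs)
    return [l - mean for l in logs]

# Oxide composition of a hypothetical ceramic sherd (percentages).
sherd = [55.0, 20.0, 15.0, 10.0]
transformed = clr(sherd)
print([round(v, 3) for v in transformed])
# clr coordinates sum to zero by construction (up to rounding),
# which removes the constant-sum constraint that distorts naive PCA.
```

Zero replacement, also mentioned above, is needed precisely because the logarithm here is undefined for parts measured as zero.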
Abstract:
The projects and products commonly referred to as Free and Open Source Software related to geomatics are undergoing a dizzying pace of evolution and updating. The "traditional" projects (map services, spatial databases, thick clients) are being joined by a broad set of components such as publishing services, thin clients, geoprocessing services, mobility, and frameworks, as well as new standards such as GeoRSS, Tiled WMS, and WPS. This article takes a brief pause to analyze the current panorama of the free software world, categorizing the projects and products that exist today in order to assess each of them, analyzing their current situation, their trajectory, their future evolution, and the interrelations existing within the free GIS software ecosystem. It reviews the situation and the available catalogue of projects/products: spatial data servers, OGC servers, lightweight map publishing/clients, desktop applications, SDI clients, development libraries, client- and server-side catalogue tools, etc. It presents the ecosystem of projects, organizations, and people collaborating on the main products, their interrelations, and their known plans for the future. The expected result is to give the reader a big-picture view that allows them to position their needs judiciously within the current panorama of GIS solutions based on free software.
Abstract:
This project was carried out with the Computer Vision Group of the Department of Computer Architecture and Technology (ATC) of the University of Girona. It focuses on medical image analysis; specifically, prostate images are analyzed in connection with developments under way in that vision group. The objectives set for this project are to develop two image-processing modules addressing two important blocks in image processing: a pre-processing module, consisting of three filters, and a segmentation block for locating the prostate within the images to be processed. The project uses the C++ programming language, in particular the open source ITK (Insight Toolkit) libraries, which are focused on medical image processing. In addition to this tool, others are used, such as Qt, a toolkit library for building graphical environments.
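The project itself builds on ITK in C++; purely as a language-neutral illustration of the filter-then-segment pipeline described above, here is a toy version with one pre-processing filter (a 3x3 median) followed by an intensity-threshold segmentation. The image data, threshold, and function names are invented for this sketch; real prostate segmentation uses far more sophisticated ITK methods.

```python
def median3(image):
    """Toy 3x3 median pre-processing filter (borders left unchanged)."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(image[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]          # median of the 9 values
    return out

def threshold_segment(image, threshold):
    """Toy segmentation: mark pixels whose intensity exceeds a threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

# Invented 5x5 grayscale image with a bright region in the middle.
image = [
    [10, 12, 11, 10, 11],
    [11, 80, 85, 82, 12],
    [10, 88, 90, 87, 11],
    [12, 84, 86, 83, 10],
    [11, 10, 12, 11, 13],
]
mask = threshold_segment(median3(image), 50)
for row in mask:
    print(row)
```

Filtering before thresholding suppresses isolated noisy pixels that would otherwise appear as spurious blobs in the mask.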
Abstract:
This document analyzes the key concepts related to free software: What is free software? What is proprietary software? What is freeware? What is a copyleft license? It then shows how free horizontal applications are developed differently from free vertical applications, among which are GIS. Next, the characteristics of GIS systems are detailed, focusing on free GIS. Being a vertical application conditions and explains a good part of the particularities of free GIS. These particularities are analyzed on the basis of impressions from GIS professionals. All of this leads to the conclusion that free GIS cannot depend solely on a community of developers and need significant financial backing.
Abstract:
gvSIG is probably the project related to geographic information that has generated the most discussion since its appearance in 2004, having become a reference within GIS, both free and proprietary, and achieving wide adoption in several countries. gvSIG is a project that aims to become an integrating application, unifying worlds such as CAD and GIS, vector GIS and raster GIS, integrating local work with Spatial Data Infrastructures, and two dimensions with 3D and 4D; in short, building a tool that can serve the broad range of users of geographic information.
Abstract:
Presentation of the gvSIG project, the free software geographic information system applied at the Generalitat Valenciana.