984 results for Open-source code


Relevance:

100.00%

Publisher:

Abstract:

Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance:

100.00%

Publisher:

Abstract:

We propose an approach based on interactive query formulation, intended to ease source code analysis and comprehension tasks. In this approach, the analyst uses a set of basic filters (linguistic, structural, quantitative, and interactivity filters) to define complex queries. These queries are built through an interactive, iterative process in which basic filters are selected and executed, and their results are visualized, modified, and combined using predefined operators. We evaluated our approach by implementing recent contributions in design-defect detection as well as feature location in code. Our results show that, in addition to being generic, our approach helps reproduce existing solutions implemented by automatic tools.
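
The filter-and-operator process described above can be made concrete with a small sketch; the entity fields, filter names, and combinators below are hypothetical illustrations under assumed data, not the approach's actual implementation.

# Minimal sketch of interactive query composition over source-code entities.
# The entity fields and filter names are hypothetical, not the paper's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    name: str               # e.g. class or method name
    loc: int                # lines of code
    fan_out: int            # number of outgoing calls
    touched_recently: bool  # stands in for the "interactivity" dimension

# Basic filters: each maps a set of entities to the subset it keeps.
def linguistic(keyword):
    return lambda es: {e for e in es if keyword in e.name.lower()}

def quantitative(min_loc):
    return lambda es: {e for e in es if e.loc >= min_loc}

def structural(min_fan_out):
    return lambda es: {e for e in es if e.fan_out >= min_fan_out}

def interactivity():
    return lambda es: {e for e in es if e.touched_recently}

# Predefined operators combine the results of already-executed filters.
def and_(f, g):
    return lambda es: f(es) & g(es)

def or_(f, g):
    return lambda es: f(es) | g(es)

if __name__ == "__main__":
    code = {
        Entity("OrderManager", 450, 30, True),
        Entity("OrderParser", 120, 4, False),
        Entity("Logger", 60, 2, True),
    }
    # Iteratively refined query: large, highly coupled "manager" classes
    # that the analyst has recently interacted with (a god-class suspect).
    query = and_(and_(linguistic("manager"), quantitative(300)),
                 and_(structural(20), interactivity()))
    print([e.name for e in query(code)])   # -> ['OrderManager']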

Relevance:

100.00%

Publisher:

Abstract:

Legacy object-oriented applications traditionally interleave different functional aspects, which can be scattered throughout the code. There are different kinds of aspects: aspects that represent business functionality, and aspects that address non-functional requirements or other design concerns such as robustness, distribution, or security. The code implementing these aspects typically cuts across several class hierarchies. Several researchers have addressed the problem of modularizing such aspects in code, through subject-oriented programming, aspect-oriented programming, and view-oriented programming. All of these methods propose techniques and tools for designing object-oriented applications as compositions of code fragments that each address a different aspect. Separating aspects in the code brings benefits for reuse and maintenance, so it is important to identify and locate these aspects in legacy object-oriented code. We are particularly interested in functional aspects. Assuming that the code implementing a functional aspect, or feature, exhibits a certain functional cohesion (dependencies between its elements), we propose to identify such features from the code. The idea is to identify, in the absence of aspect-oriented programming paradigms, the techniques that allow the different functional aspects to be implemented in object-oriented code. Our approach consists of identifying the techniques developers use to integrate a feature when aspect-oriented techniques are not available, characterizing the footprint these techniques leave on the code, and developing tools to detect these footprints. We thus present two approaches for identifying existing features in object-oriented code. The first identifies the different design patterns used to integrate such features into the code. The second uses formal concept analysis to identify recurring features in the code. We experiment with both approaches on open-source object-oriented systems. The results show that our approaches are effective at identifying the different features in legacy object-oriented code and can suggest refactoring opportunities.
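
As a rough, hypothetical illustration of the formal-concept-analysis idea (not the thesis's actual tooling), the sketch below derives concepts from a small method-by-element context; each concept groups the methods that share the same used elements, a candidate recurring feature. The context data is invented.

# Minimal formal-concept-analysis sketch: methods are objects, the program
# elements they use are attributes; each concept groups methods that share
# exactly the same set of used elements (a candidate recurring feature).
from itertools import combinations

# Hypothetical incidence relation: method -> program elements it uses.
context = {
    "Order.print":   {"Printer", "Format"},
    "Invoice.print": {"Printer", "Format"},
    "Order.save":    {"Database"},
    "Invoice.save":  {"Database", "Format"},
}

def common_attributes(methods):
    sets = [context[m] for m in methods]
    return set.intersection(*sets) if sets else set()

def covering_methods(attrs):
    return {m for m, used in context.items() if attrs <= used}

# Enumerate concepts by closing every subset of methods (fine at toy scale).
concepts = set()
methods = list(context)
for r in range(1, len(methods) + 1):
    for subset in combinations(methods, r):
        attrs = common_attributes(subset)
        extent = frozenset(covering_methods(attrs))
        concepts.add((extent, frozenset(attrs)))

for extent, intent in sorted(concepts, key=lambda c: -len(c[0])):
    if intent:  # ignore the top concept with an empty intent
        print(sorted(extent), "share", sorted(intent))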

Relevance:

100.00%

Publisher:

Abstract:

Code review is an essential process whatever the maturity of a project; it seeks to assess the contribution made by the code submitted by developers. In principle, code review improves the quality of code changes (patches) before they are committed to the project's master repository. In practice, carrying out this process does not rule out the possibility that some bugs go unnoticed. In this document, we present an empirical study investigating code review in a large open-source project. We examine the relationships between reviewers' inspections and the personal and temporal factors that could affect the quality of those inspections. First, we report a quantitative study in which we used the SZZ algorithm to detect bug-inducing changes, which we linked with the code review information extracted from the issue tracking system. We found that the reasons why reviewers miss certain bugs are correlated both with their personal characteristics and with the technical properties of the patches under review. We then report a qualitative study in which we invited Mozilla developers to give their opinion on the attributes of a well-conducted code review. The results of our survey suggest that developers consider technical aspects (patch size, number of chunks and of modules) as much as personal characteristics (experience and review queue) to be factors that strongly influence the quality of code reviews.
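
In outline, SZZ blames the lines deleted or modified by a bug-fixing commit and flags the commits that last touched them as candidate bug-inducing changes. The sketch below is a simplified, hypothetical rendition of that idea on top of git; the repository path and commit hash are placeholders, and it is not the study's actual pipeline.

# Simplified SZZ-style sketch: for a known bug-fixing commit, blame the lines
# it removed in its parent and report the commits that introduced them.
# Repository path and commit hash are placeholders, not from the study.
import re
import subprocess

def git(repo, *args):
    return subprocess.run(["git", "-C", repo, *args],
                          capture_output=True, text=True, check=True).stdout

def bug_inducing_candidates(repo, fix_commit):
    candidates = set()
    diff = git(repo, "show", "--unified=0", "--format=", fix_commit)
    current_file, old_line = None, 0
    for line in diff.splitlines():
        if line.startswith("--- a/"):
            current_file = line[6:]
        elif line.startswith("@@"):
            # Hunk header: "@@ -old_start,old_count +new_start,new_count @@"
            old_line = int(re.match(r"@@ -(\d+)", line).group(1))
        elif line.startswith("-") and not line.startswith("---"):
            if current_file:
                # Blame the removed line in the fix's parent revision.
                blame = git(repo, "blame", "-L", f"{old_line},{old_line}",
                            f"{fix_commit}^", "--", current_file)
                candidates.add(blame.split()[0])
            old_line += 1
        elif not line.startswith("+"):
            old_line += 1
    return candidates

if __name__ == "__main__":
    print(bug_inducing_candidates("/path/to/repo", "abc1234"))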

Relevance:

100.00%

Publisher:

Abstract:

Measurements of the ionospheric E-region during total solar eclipses have been used to provide information about the evolution of the solar magnetic field and EUV and X-ray emissions from the solar corona and chromosphere. By measuring levels of ionisation during an eclipse and comparing these measurements with an estimate of the unperturbed ionisation levels (such as those made during a control day, where available) it is possible to estimate the percentage of ionising radiation being emitted by the solar corona and chromosphere. Previously unpublished data from the two eclipses presented here are particularly valuable as they provide information that supplements the data published to date. The eclipse of 23 October 1976 over Australia provides information in a data gap that would otherwise have spanned the years 1966 to 1991. The eclipse of 4 December 2002 over Southern Africa is important as it extends the published sequence of measurements. Comparing measurements from eclipses between 1932 and 2002 with the solar magnetic source flux reveals that changes in the solar EUV and X-ray flux lag the open source flux measurements by approximately 1.5 years. We suggest that this unexpected result comes about from changes to the relative size of the limb corona between eclipses, with the lag representing the time taken to populate the coronal field with plasma hot enough to emit the EUV and X-rays ionising our atmosphere.
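
Under the usual assumption of photochemical equilibrium in the E-region (production balanced by quadratic recombination, so production scales with the square of electron density), the fraction of ionising flux still reaching the layer at totality scales roughly as the square of the ratio of eclipsed to unperturbed electron density. The numbers in the sketch below are illustrative only, not the measurements discussed here.

# Illustrative estimate only: assuming E-region photochemical equilibrium
# (production q ~ alpha * N^2), the fraction of ionising EUV/X-ray flux still
# reaching the layer at totality scales as (N_eclipse / N_control)^2.
def coronal_flux_fraction(n_eclipse, n_control):
    return (n_eclipse / n_control) ** 2

# Hypothetical electron densities (m^-3) at totality and on a control day.
print(f"{coronal_flux_fraction(5.0e10, 1.1e11):.1%}")  # about 21% of the flux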

Relevance:

100.00%

Publisher:

Abstract:

KIVA is an open-source Computational Fluid Dynamics (CFD) code capable of computing transient two- and three-dimensional chemically reactive fluid flows with sprays. The latest version in the KIVA family is KIVA-4, which can handle unstructured meshes. This project focuses on the implementation of a Conjugate Heat Transfer (CHT) code in KIVA-4. A previous version of KIVA with conjugate heat transfer was developed at Michigan Technological University by Egel Urip and is used in this project. The first phase of the project studied the differences in code structure between the previous version of KIVA and KIVA-4, which was the most challenging part of the work. The second phase involved reverse engineering: the CHT code was extracted from the previous version and implemented in KIVA-4 according to the new code structure. The implemented code was validated using a 4-valve pentroof engine case. A solid cylinder wall surrounding three-quarters of the engine cylinder was built with GRIDGEN, and the heat transfer to the solid wall over one engine cycle (0-720 crank angle degrees) was compared with reference results, obtained by running the same engine case in the previous KIVA version with Egel's original code. The results of the current code agree closely with the reference results, which verifies the successful implementation of the CHT code in KIVA-4.
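
Conjugate heat transfer couples convection on the fluid side to conduction in the surrounding solid through a shared wall temperature and heat flux. The 1-D explicit sketch below only illustrates that coupling idea; it is not KIVA-4's numerics, and all material properties and values are invented.

# Toy 1-D conjugate heat transfer sketch: a hot gas cell exchanges heat with a
# solid wall resolved by a few conduction cells. Illustrative only; not the
# KIVA-4 CHT implementation, and all properties below are invented.
import numpy as np

def advance(t_gas, t_solid, steps, dt):
    h = 500.0                          # gas-wall convective coefficient, W/(m^2 K)
    k, rho, cp = 40.0, 7800.0, 500.0   # solid conductivity, density, heat capacity
    dx = 1e-3                          # solid cell size, m
    alpha = k / (rho * cp)
    for _ in range(steps):
        # Convective flux from gas into the first solid cell.
        q_wall = h * (t_gas - t_solid[0])
        t_new = t_solid.copy()
        # Explicit conduction update for interior solid cells.
        t_new[1:-1] += alpha * dt / dx**2 * (t_solid[2:] - 2 * t_solid[1:-1]
                                             + t_solid[:-2])
        # Surface cell: convection in, conduction out to its neighbour.
        t_new[0] += dt / (rho * cp * dx) * (q_wall
                     - k * (t_solid[0] - t_solid[1]) / dx)
        # Outer face held at coolant temperature (fixed boundary).
        t_new[-1] = t_solid[-1]
        t_solid = t_new
    return t_solid

wall = advance(t_gas=1800.0, t_solid=np.full(10, 400.0), steps=2000, dt=1e-4)
print(wall.round(1))   # temperature profile through the wall, K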

Relevance:

100.00%

Publisher:

Abstract:

Detecting bugs as early as possible plays an important role in ensuring software quality before shipping. We argue that mining previous bug fixes can produce good knowledge about why bugs happen and how they are fixed. In this paper, we mine the change history of 717 open source projects to extract bug-fix patterns. We also manually inspect many of the bugs we found to get insights into the contexts and reasons behind those bugs. For instance, we found out that missing null checks and missing initializations are very recurrent and we believe that they can be automatically detected and fixed.
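
To make the "missing null check" pattern concrete, the sketch below checks whether a fix added a None-guard, the Python analogue of a null check; it is a hypothetical illustration, far simpler than the paper's mining pipeline, and the example functions are invented.

# Tiny illustration of one recurrent fix pattern ("missing null check"):
# given the before/after source of a fix, report whether the patch added a
# guard of the form "x is None" / "x is not None". Hypothetical sketch,
# not the paper's mining tooling.
import ast

def none_guards(source):
    guards = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Compare) and any(
                isinstance(op, (ast.Is, ast.IsNot)) for op in node.ops):
            if any(isinstance(v, ast.Constant) and v.value is None
                   for v in [node.left, *node.comparators]):
                guards.add(ast.dump(node))
    return guards

def fix_adds_null_check(before, after):
    return len(none_guards(after) - none_guards(before)) > 0

buggy = "def total(order):\n    return order.price * order.qty\n"
fixed = ("def total(order):\n"
         "    if order is None:\n"
         "        return 0\n"
         "    return order.price * order.qty\n")
print(fix_adds_null_check(buggy, fixed))   # True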

Relevance:

100.00%

Publisher:

Abstract:

Open innovation is increasingly being adopted in business and describes a situation in which firms exchange ideas and knowledge with external participants, such as customers, suppliers, partner firms, and universities. This article extends the concept of open innovation with a push model of open innovation: knowledge is voluntarily created outside a firm by individuals and organisations who proceed to push knowledge into a firm’s open innovation project. For empirical analysis, we examine source code and newsgroup data on the Eclipse Development Platform. We find that outsiders invest as much in the firm’s project as the founding firm itself. Based on the insights from Eclipse, we develop four propositions: ‘preemptive generosity’ of a firm, ‘continuous commitment’, ‘adaptive governance structure’, and ‘low entry barrier’ are contexts that enable the push model of open innovation.

Relevance:

100.00%

Publisher:

Abstract:

Carbon (C) and nitrogen (N) process-based models are important tools for estimating and reporting greenhouse gas emissions and changes in soil C stocks. There is a need for continuous evaluation, development and adaptation of these models to improve scientific understanding, national inventories and assessment of mitigation options across the world. To date, much of the information needed to describe different processes in ecosystem models, such as transpiration, photosynthesis, plant growth and maintenance, above- and below-ground carbon dynamics, decomposition and nitrogen mineralization, remains inaccessible to the wider community, being stored within model computer source code or held internally by modelling teams. Here we describe the Global Research Alliance Modelling Platform (GRAMP), a web-based modelling platform to link researchers with appropriate datasets, models and training material. It will provide access to model source code and an interactive platform for researchers to form a consensus on existing methods and to synthesize new ideas, which will help to advance progress in this area. The platform will eventually support a variety of models, but to trial the platform and test the architecture and functionality, it was piloted with variants of the DNDC model. The intention is to form a worldwide collaborative network (a virtual laboratory) via an interactive website with access to models and best practice guidelines; appropriate datasets for testing, calibrating and evaluating models; on-line tutorials; and links to modelling and data-provider research groups and their associated publications. A graphical user interface has been designed to view the model development tree and access all of the above functions.

Relevance:

100.00%

Publisher:

Abstract:

In Model-Driven Engineering (MDE), developers create a model using a language such as the Unified Modeling Language (UML) or UML for Real-Time (UML-RT) and use tools such as Papyrus or Papyrus-RT to generate code from the model they create. Tracing allows developers to gain insights into their application as it runs, such as which events occur and timing information. We add monitoring capabilities, based on the Linux Trace Toolkit: next generation (LTTng), to models created in UML-RT using Papyrus-RT. The implementation requires changing the code generator to add, to the generated code, tracing statements for the events that the user wants to monitor. We also change the makefile to automate the build process, and we create an Extensible Markup Language (XML) file that allows developers to view their traces visually using Trace Compass, an Eclipse-based trace viewing tool. Finally, we validate our results using three models that we create and trace.
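
In outline, the generator change means that, for each event the user wants to monitor, a tracing statement is emitted next to the handler code that is already generated. The sketch below is a hypothetical, heavily simplified stand-in for that step: the capsule and event names and the shape of the generated C++ are invented, and only the tracepoint(provider, event, ...) call form follows the usual LTTng-UST macro style.

# Hypothetical sketch of the generator-side change: when emitting the body of
# a UML-RT transition handler, also emit an LTTng-UST tracepoint for events
# the user chose to monitor. Names and code shapes are invented; only the
# tracepoint(provider, event, args...) call style follows LTTng-UST usage.
MONITORED = {"Pinger::pingReceived", "Pinger::timeout"}

def emit_handler(capsule, event, body_lines):
    code = [f"void {capsule}_Actor::on_{event}() {{"]
    if f"{capsule}::{event}" in MONITORED:
        # Tracing statement injected into the generated code.
        code.append(f'    tracepoint(modelprov, {event},'
                    f' "{capsule}", getIndex());')
    code += [f"    {line}" for line in body_lines]
    code.append("}")
    return "\n".join(code)

print(emit_handler("Pinger", "pingReceived",
                   ["count++;", "pongPort.pong().send();"]))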

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a vision that allows the combined use of model-driven engineering, run-time monitoring, and animation for the development and analysis of components in real-time embedded systems. A key building block in the tool environment supporting this vision is a highly customizable code generation process. Customization is performed via a configuration specification which describes the ways in which input is provided to the component, the ways in which run-time execution information can be observed, and how these observations drive animation tools. The environment is envisioned to be suitable for activities ranging from quality assurance to supporting certification, teaching, and outreach, and will be built exclusively with open source tools to increase impact. A preliminary prototype implementation is described.
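
A minimal sketch of what such a configuration specification might contain is shown below; the field names are hypothetical and simply mirror the three concerns listed above (how input is provided to the component, which run-time execution information is observed, and which animation the observations drive).

# Hypothetical configuration specification for a customizable code generator:
# where the component's input comes from, what is observed at run time, and
# which animation the observations drive. All field names are invented.
from dataclasses import dataclass, field

@dataclass
class InputBinding:
    port: str
    source: str                # e.g. "replay:trace.csv" or "stdin"

@dataclass
class Observation:
    event: str                 # state entry, transition firing, message send...
    payload: list[str] = field(default_factory=list)

@dataclass
class AnimationBinding:
    observation: str
    widget: str                # e.g. "timeline", "state-diagram-highlight"

@dataclass
class GenerationConfig:
    inputs: list[InputBinding]
    observations: list[Observation]
    animations: list[AnimationBinding]

config = GenerationConfig(
    inputs=[InputBinding(port="sensorIn", source="replay:run1.csv")],
    observations=[Observation(event="transition", payload=["state", "time"])],
    animations=[AnimationBinding(observation="transition",
                                 widget="state-diagram-highlight")],
)
print(config)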

Relevance:

100.00%

Publisher:

Abstract:

Context: Obfuscation is a common technique used to protect software against malicious reverse engineering. Obfuscators manipulate the source code to make it harder to analyze and more difficult for an attacker to understand. Although different obfuscation algorithms and implementations are available, they have never been directly compared in a large-scale study. Aim: This paper aims at evaluating and quantifying the effect of several different obfuscation implementations (both open source and commercial), to help developers and project managers decide which one could be adopted. Method: In this study we applied 44 obfuscations to 18 subject applications covering a total of 4 million lines of code. The effectiveness of these source code obfuscations has been measured using 10 code metrics, considering modularity, size and complexity of code. Results: Results show that some of the considered obfuscations are effective in making code metrics change substantially from original to obfuscated code, although this change (called the potency of the obfuscation) differs across metrics. In the paper we recommend which obfuscations to select, given the security requirements of the software to be protected.
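
The potency mentioned in the results is commonly defined, following Collberg et al., as the relative change of a metric from the original program to its obfuscated version; a minimal sketch of computing it over a few metrics, with invented values, follows.

# Potency of an obfuscation for one metric, in the usual Collberg-style sense:
# relative change of the metric from original (P) to obfuscated (P') code,
# i.e. E(P')/E(P) - 1. The metric values below are invented for illustration.
def potency(original, obfuscated):
    return {m: obfuscated[m] / original[m] - 1.0 for m in original}

original = {"LOC": 12000, "cyclomatic": 2100, "coupling": 340}
obfuscated = {"LOC": 19500, "cyclomatic": 5600, "coupling": 410}

for metric, value in potency(original, obfuscated).items():
    print(f"{metric:>10}: {value:+.0%}")   # e.g. cyclomatic: +167%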

Relevance:

100.00%

Publisher:

Abstract:

The Graphical User Interface (GUI) is an integral component of contemporary computer software. A stable and reliable GUI is necessary for the correct functioning of software applications. Comprehensive verification of the GUI is a routine part of most software development life-cycles. The input space of a GUI is typically large, making exhaustive verification difficult. GUI defects are often revealed by exercising parts of the GUI that interact with each other, and it is challenging for a verification method to drive the GUI into states that might contain defects. In recent years, model-based methods that target specific GUI interactions have been developed. These methods create a formal model of the GUI's input space from the specification of the GUI, visible GUI behaviors, and static analysis of the GUI's program-code. GUIs are typically dynamic in nature; their user-visible state is guided by the underlying program-code and dynamic program-state. This research extends existing model-based GUI testing techniques by modelling interactions between the visible GUI of a GUI-based application and its underlying program-code. The new model is able to test the GUI, efficiently and effectively, in ways that were not possible using existing methods. The thesis is this: long, useful GUI testcases can be created by examining the interactions between the GUI of a GUI-based application and its program-code. To explore this thesis, a model-based GUI testing approach is formulated and evaluated. In this approach, program-code-level interactions between GUI event handlers are examined, modelled, and deployed to construct long GUI testcases. These testcases are able to drive the GUI into states that were not reachable using existing models. Implementation and evaluation have been conducted using GUITAR, a fully automated, open-source GUI testing framework.
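
The central idea, chaining events whose handlers interact at the program-code level into long test cases, can be sketched as a walk over an interaction relation; the handlers, shared-state sets, and walk policy below are hypothetical stand-ins, not the GUITAR-based implementation.

# Hypothetical sketch: build long GUI test cases by chaining events whose
# handlers interact through shared program state (e.g. both read or write the
# same field). The handlers, shared-state sets, and walk policy are invented.

# handler -> program variables it reads or writes (from static analysis).
HANDLER_STATE = {
    "openFile":  {"document", "title"},
    "editText":  {"document", "dirty"},
    "saveFile":  {"document", "dirty", "title"},
    "closeApp":  {"dirty"},
}

def interacts(e1, e2):
    return bool(HANDLER_STATE[e1] & HANDLER_STATE[e2])

def long_testcase(start, length):
    """Greedy walk that prefers handlers not yet covered but sharing state."""
    case = [start]
    while len(case) < length:
        options = [e for e in HANDLER_STATE
                   if e != case[-1] and interacts(case[-1], e)]
        if not options:
            break
        fresh = [e for e in options if e not in case]
        case.append((fresh or options)[0])
    return case

print(long_testcase("openFile", 6))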

Relevance:

100.00%

Publisher:

Abstract:

Code patterns, including programming patterns and design patterns, are good references for programming language feature improvement and software re-engineering. However, to our knowledge, no existing research has attempted to detect code patterns based on code clone detection technology. In this study, we build upon previous work and propose to detect and analyze code patterns from a collection of open source projects using NiPAT technology. Because design patterns are most closely associated with object-oriented languages, we choose Java and Python projects for our study. The tool we use for detecting patterns is NiPAT, a pattern detection tool based on the NiCad clone detector and originally developed for the TXL programming language. We extend NiPAT to the Java and Python programming languages. We then identify all the patterns in the pattern report and classify them into several different categories. At the end of the study, we analyze all the patterns and compare the differences between Java and Python patterns.
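
Once NiPAT has produced a pattern report, identifying and classifying the patterns amounts to tallying instances per category and language; the report entries, category map, and counts below are invented, shown only to make that step concrete, and do not reproduce NiPAT's real report format.

# Hypothetical tally of a pattern report: each entry is (language, pattern
# name, number of instances). The entries, category map, and counts are
# invented; NiPAT's actual report format is not reproduced here.
from collections import defaultdict

REPORT = [
    ("Java", "null-check-before-use", 412),
    ("Java", "singleton-accessor", 57),
    ("Python", "null-check-before-use", 186),
    ("Python", "context-manager-wrapper", 94),
]

CATEGORY = {
    "null-check-before-use": "programming pattern",
    "singleton-accessor": "design pattern",
    "context-manager-wrapper": "programming pattern",
}

totals = defaultdict(int)
for language, pattern, instances in REPORT:
    totals[(language, CATEGORY[pattern])] += instances

for (language, category), n in sorted(totals.items()):
    print(f"{language:<7} {category:<20} {n}")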

Relevance:

100.00%

Publisher:

Abstract:

The availability of a huge amount of source code from code archives and open-source projects opens up the possibility of merging the machine learning, programming languages, and software engineering research fields. This area is often referred to as Big Code: programming languages are treated in much the same way as natural languages, and different features and patterns of code can be exploited to perform many useful tasks and build supportive tools. Among all the possible applications that can be developed within the area of Big Code, the work presented in this research thesis focuses on two particular tasks: Programming Language Identification (PLI) and Software Defect Prediction (SDP) for source code. Programming language identification is commonly needed in program comprehension and is usually performed directly by developers. However, at large scales, such as in widely used archives (GitHub, Software Heritage), automation of this task is desirable. To accomplish this aim, the problem is analyzed from different points of view (text- and image-based learning approaches) and different models are created, paying particular attention to their scalability. Software defect prediction is a fundamental step in software development for improving quality and assuring the reliability of software products. In the past, defects were searched for by manual inspection or using automatic static and dynamic analyzers. Now, the automation of this task can be tackled using learning approaches that can speed up and improve the related procedures. Here, two models have been built and analyzed to detect some of the most common bugs and errors at different code granularity levels (file and method level). The data used and the models' architectures are analyzed and described in detail. Quantitative and qualitative results are reported for both the PLI and SDP tasks, and differences and similarities with respect to related work are discussed.
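
As a deliberately simple stand-in for the text-based PLI models discussed above, the sketch below trains a character n-gram classifier with scikit-learn on a few toy snippets; the snippets, labels, and pipeline are illustrative assumptions, not the thesis's models or data.

# Minimal text-based programming-language-identification sketch using
# character n-grams and a linear model (scikit-learn). Toy data only; this is
# not the thesis's architecture or training corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

snippets = [
    ("def add(a, b):\n    return a + b", "Python"),
    ("print('hello world')", "Python"),
    ("public static int add(int a, int b) { return a + b; }", "Java"),
    ("System.out.println(\"hello world\");", "Java"),
    ("fn add(a: i32, b: i32) -> i32 { a + b }", "Rust"),
    ("println!(\"hello world\");", "Rust"),
]
texts, labels = zip(*snippets)

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Likely classified as Rust-like by this toy model.
print(model.predict(["fn main() { println!(\"hi\"); }"]))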