970 results for Open Source License


Relevance:

90.00%

Publisher:

Abstract:

Dannie Jost gave a presentation outlining some of the challenges that open source hardware poses to the patent system at the "Open Knowledge Festival", in the topic stream on open design, hardware, manufacturing and making; September 19, 2012; Helsinki, Finland. The topic stream generated considerable discussion; it serves to educate an audience that is usually very averse to patents and copyright, and helps the researcher understand the conflicts surrounding emerging technologies, in particular digital technologies and the (digitally enabled) maker movement.

Relevance:

90.00%

Publisher:

Abstract:

Starting from the Schumpeterian producer-driven understanding of innovation, followed by user-generated solutions and collaborative forms of co-creation, scholars have investigated the drivers and the nature of the interactions underpinning success in various ways. The innovation literature has come a long way, and open innovation has attracted researchers to investigate problems such as the compatibility of external resources, networks of innovation, and open source collaboration. Openness itself has gained various shades of meaning in the different strands of literature. In this paper the author provides an overview and a preliminary evaluation of the different models of open innovation, illustrated with empirical findings from various fields drawn from the literature. She points to the relevance of transaction costs in shaping viable (open) innovation strategies of firms, and to the importance of defining the locus of innovation for further analyses of different firm- and interaction-level formations.

Relevance:

90.00%

Publisher:

Abstract:

FDI is believed to be a conduit of new technologies between countries. The first chapter of this dissertation studies the advantages of outward FDI for the home country of multinationals conducting research and development abroad. We use patent citations as a proxy for technology spillovers and provide empirical evidence supporting the hypothesis that a U.S. subsidiary conducting research and development overseas facilitates the flow of knowledge between its host and home countries.

The second chapter examines the impact of intellectual property rights (IPR) reforms on technology flows between the U.S. and the host countries of U.S. affiliates. We again use patent citations to examine whether the diffusion of new technology between the host countries and the U.S. is accelerated by the reforms. Our results suggest that the reforms favor the innovative efforts of domestic firms in the reforming countries rather than those of U.S. affiliates. In other words, the reforms mediate technology flows from the U.S. to the reforming countries.

The third chapter deals with another form of IPR, open source (OS) licenses. These differ in the conditions under which licensors and OS contributors are allowed to modify and redistribute the source code. We measure OS project quality by the speed with which programming bugs are fixed and test whether the license chosen by project leaders influences bug resolution rates. In initial regressions, we find a strong correlation between the hazard of bug resolution and the use of highly restrictive licenses. However, license choices are likely to be endogenous. We instrument license choice using (i) the human language in which contributors operate and (ii) the license choice of the project leaders for a previous project. We then find weak evidence that restrictive licenses adversely affect project success.
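As a rough illustration of the kind of survival analysis described in the third chapter (the hazard of bug resolution as a function of license restrictiveness), the following is a minimal sketch using the Python lifelines package. The column names, toy data and covariates are assumptions for illustration only, and the instrumental-variables step described in the abstract is omitted.

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical per-bug data: time until the bug was fixed (days), whether it was
# fixed within the observation window, and project-level covariates (assumed).
bugs = pd.DataFrame({
    "days_to_fix":         [12, 45, 3, 90, 7, 30, 60, 5, 21, 14],
    "fixed":               [1,  1,  1, 0,  1, 1,  0,  1, 1,  1],   # 0 = still open (censored)
    "restrictive_license": [1,  1,  0, 1,  0, 0,  1,  0, 1,  0],   # e.g. copyleft vs. permissive
    "project_age_years":   [2,  5,  1, 7,  3, 2,  6,  1, 4,  2],
})

# Cox proportional hazards model: a negative coefficient on restrictive_license
# would indicate that restrictive licenses are associated with slower bug resolution.
cph = CoxPHFitter()
cph.fit(bugs, duration_col="days_to_fix", event_col="fixed")
cph.print_summary()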

Relevance:

90.00%

Publisher:

Abstract:

Over the past few years, logging has evolved from simple printf statements to more complex and widely used logging libraries. Today, logging information is used to support various development activities such as fixing bugs, analyzing the results of load tests, monitoring performance and transferring knowledge. Recent research has examined how to improve logging practices by informing developers what to log and where to log. Furthermore, the strong dependence on logging has led to the development of logging libraries that have reduced the intricacies of logging, which has resulted in an abundance of log information. Two challenges have emerged as modern software systems start to treat logging as a core aspect of their software: 1) infrastructural challenges, due to the plethora of logging libraries available today, and 2) processing challenges, due to the large number of log processing tools that ingest logs and produce useful information from them. In this thesis, we explore these two challenges. We first explore the infrastructural challenges that arise due to the plethora of logging libraries available today. As systems evolve, their logging infrastructure has to evolve as well (commonly by migrating to new logging libraries). We explore logging library migrations within Apache Software Foundation (ASF) projects and find that close to 14% of the projects within the ASF migrate their logging libraries at least once. For the processing challenges, we explore the different factors that can affect the likelihood of a logging statement changing in the future in four open source systems, namely ActiveMQ, Camel, CloudStack and Liferay. Such changes are likely to negatively impact the log processing tools that must be updated to accommodate them. We find that 20%-45% of the logging statements within the four systems are changed at least once. We construct random forest classifiers and Cox models to determine the likelihood of both just-introduced and long-lived logging statements changing in the future. We find that file ownership, developer experience, log density and SLOC are important factors in determining the stability of logging statements.
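The following is a minimal sketch, in Python with scikit-learn, of the kind of classifier described above: predicting whether a logging statement will change from metrics such as file ownership, developer experience, log density and SLOC. The toy feature values are illustrative assumptions; the thesis's actual feature engineering and evaluation are not reproduced here.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical per-logging-statement metrics; 'changed' marks statements that were
# later modified. Real data would be mined from the projects' version histories.
data = pd.DataFrame({
    "file_ownership": [0.9, 0.4, 0.7, 0.2, 0.8, 0.3, 0.6, 0.5, 0.95, 0.1],
    "dev_experience": [120, 15, 60, 5, 200, 10, 45, 30, 300, 2],   # prior commits
    "log_density":    [0.02, 0.10, 0.05, 0.15, 0.01, 0.12, 0.04, 0.08, 0.02, 0.20],
    "sloc":           [300, 1200, 500, 2500, 150, 1800, 700, 900, 250, 3000],
    "changed":        [0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
})

X, y = data.drop(columns="changed"), data["changed"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)

print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
print("Feature importances:", dict(zip(X.columns, clf.feature_importances_)))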

Relevance:

90.00%

Publisher:

Abstract:

In Model-Driven Engineering (MDE), the developer creates a model using a language such as the Unified Modeling Language (UML) or UML for Real-Time (UML-RT) and uses tools such as Papyrus or Papyrus-RT to generate code from that model. Tracing allows developers to gain insights into their application as it runs, such as which events occur and their timing. We add monitoring capabilities, using the Linux Trace Toolkit: next generation (LTTng), to models created in UML-RT using Papyrus-RT. The implementation requires changing the code generator so that it adds tracing statements to the generated code for the events that the user wants to monitor. We also change the makefile to automate the build process, and we create an Extensible Markup Language (XML) file that allows developers to view their traces visually in Trace Compass, an Eclipse-based trace viewing tool. Finally, we validate our results using three models that we create and trace.
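As a toy illustration of the instrumentation step, the sketch below inserts trace statements into already-generated source files for a set of monitored events. It is a simplified stand-in written as a Python text-processing pass; the thesis modifies the Papyrus-RT code generator itself, and the directory layout, event-handler pattern and tracepoint call shown here are assumptions.

import re
from pathlib import Path

# Events the user chose to monitor (assumed names).
MONITORED_EVENTS = {"startTimer", "dataReceived"}

# Assumed pattern for a generated event-handler definition, e.g.
#   void Capsule_Pinger::startTimer(...) {
HANDLER_RE = re.compile(r"^\s*void\s+\w+::(\w+)\s*\(.*\)\s*\{")

def instrument(path: Path) -> None:
    out_lines = []
    for line in path.read_text().splitlines():
        out_lines.append(line)
        match = HANDLER_RE.match(line)
        if match and match.group(1) in MONITORED_EVENTS:
            # Insert a (hypothetical) LTTng-style tracepoint right after the handler opens.
            out_lines.append(f'    tracepoint(model_provider, event_entry, "{match.group(1)}");')
    path.write_text("\n".join(out_lines) + "\n")

for src in Path("generated-src").glob("*.cc"):   # assumed output directory
    instrument(src)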

Relevance:

90.00%

Publisher:

Abstract:

PURPOSE: Radiation therapy is used to treat cancer using carefully designed plans that maximize the radiation dose delivered to the target and minimize damage to healthy tissue, with the dose administered over multiple occasions. Creating treatment plans is a laborious process and presents an obstacle to more frequent replanning, which remains an unsolved problem. However, in between new plans being created, the patient's anatomy can change due to multiple factors including reduction in tumor size and loss of weight, which results in poorer patient outcomes. Cloud computing is a newer technology that is slowly being used for medical applications with promising results. The objective of this work was to design and build a system that could analyze a database of previously created treatment plans, which are stored with their associated anatomical information in studies, to find the one with the most similar anatomy to a new patient. The analyses would be performed in parallel on the cloud to decrease the computation time of finding this plan. METHODS: The system used SlicerRT, a radiation therapy toolkit for the open-source platform 3D Slicer, for its tools to perform the similarity analysis algorithm. Amazon Web Services was used for the cloud instances on which the analyses were performed, as well as for storage of the radiation therapy studies and messaging between the instances and a master local computer. A module was built in SlicerRT to provide the user with an interface to direct the system on the cloud, as well as to perform other related tasks. RESULTS: The cloud-based system out-performed previous methods of conducting the similarity analyses in terms of time, as it analyzed 100 studies in approximately 13 minutes, and produced the same similarity values as those methods. It also scaled up to larger numbers of studies to analyze in the database with a small increase in computation time of just over 2 minutes. CONCLUSION: This system successfully analyzes a large database of radiation therapy studies and finds the one that is most similar to a new patient, which represents a potential step forward in achieving feasible adaptive radiation therapy replanning.
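A minimal sketch of how the master computer might dispatch similarity analyses to cloud workers over a message queue, assuming Amazon SQS is used for the messaging described above. The queue names, message format and study identifiers are assumptions for illustration; the actual system is built as a SlicerRT module and its protocol is not reproduced here.

import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
job_queue = sqs.get_queue_url(QueueName="similarity-jobs")["QueueUrl"]        # assumed queue
result_queue = sqs.get_queue_url(QueueName="similarity-results")["QueueUrl"]  # assumed queue

# Dispatch one analysis job per stored study: compare the new patient against it.
study_ids = [f"study-{i:03d}" for i in range(100)]
for study_id in study_ids:
    sqs.send_message(QueueUrl=job_queue,
                     MessageBody=json.dumps({"new_patient": "patient-xyz",
                                             "study_id": study_id}))

# Collect the similarity values computed by the worker instances.
results = {}
while len(results) < len(study_ids):
    resp = sqs.receive_message(QueueUrl=result_queue, MaxNumberOfMessages=10,
                               WaitTimeSeconds=5)
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])
        results[body["study_id"]] = body["similarity"]
        sqs.delete_message(QueueUrl=result_queue, ReceiptHandle=msg["ReceiptHandle"])

best = max(results, key=results.get)
print("Most similar study:", best, results[best])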

Relevance:

90.00%

Publisher:

Abstract:

Model-Driven Engineering builds on the principle that code can be generated automatically from software models, potentially saving development time and cost. With this methodology, a system's structure and behaviour can be expressed in more abstract, high-level terms, without some of the accidental complexity that the use of a general-purpose language can bring. Models are the actual implementation of the system, unlike in traditional software development, where models are often used for documentation purposes only. However, once the code is generated from the model, testing and debugging activities tend to happen at the code level, and the model is not updated. We believe that monitoring at the model level could facilitate quality assurance activities, as errors are detected in early phases of development. In this thesis, we create a Monitoring Configuration for PapyrusRT, an open source model-driven engineering tool in Eclipse. We support the run-time monitoring of UML-RT elements with a tracing tool called LTTng. We annotate the model with monitoring information to be used by the code generator when adding tracepoint statements for the corresponding elements. We provide the option of a timing specification to discover latency errors on the model. We validate the results by creating and tracing real-time models in PapyrusRT.
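To illustrate the latency checks that a timing specification enables, here is a minimal sketch in Python that scans a recorded event trace for entry/exit pairs whose elapsed time exceeds a specified bound. The event names, the bound and the in-memory trace format (pre-parsed (timestamp-in-ns, event-name) pairs) are assumptions; reading an actual LTTng trace, e.g. through the Babeltrace bindings, is omitted.

# Timing specification (assumed): maximum allowed latency between an entry event
# and its matching exit event, in nanoseconds.
TIMING_SPEC = {("handleRequest_entry", "handleRequest_exit"): 2_000_000}  # 2 ms

# A pre-parsed trace: (timestamp_ns, event_name) pairs in chronological order.
trace = [
    (1_000_000, "handleRequest_entry"),
    (2_500_000, "handleRequest_exit"),
    (5_000_000, "handleRequest_entry"),
    (9_800_000, "handleRequest_exit"),
]

def check_latencies(trace, spec):
    """Report every entry/exit pair whose latency exceeds the specification."""
    open_events = {}
    violations = []
    for ts, name in trace:
        for (entry, exit_), bound in spec.items():
            if name == entry:
                open_events[entry] = ts
            elif name == exit_ and entry in open_events:
                latency = ts - open_events.pop(entry)
                if latency > bound:
                    violations.append((entry, exit_, latency))
    return violations

for entry, exit_, latency in check_latencies(trace, TIMING_SPEC):
    print(f"Latency violation: {entry} -> {exit_} took {latency} ns")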

Relevance:

90.00%

Publisher:

Abstract:

Modern software applications are becoming more dependent on database management systems (DBMSs). DBMSs are usually used as black boxes by software developers. For example, Object-Relational Mapping (ORM) is one of the most popular database abstraction approaches that developers use nowadays. Using ORM, objects in Object-Oriented languages are mapped to records in the database, and object manipulations are automatically translated to SQL queries. As a result of such conceptual abstraction, developers do not need deep knowledge of databases; however, all too often this abstraction leads to inefficient and incorrect database access code. Thus, this thesis proposes a series of approaches to improve the performance of database-centric software applications that are implemented using ORM. Our approaches focus on troubleshooting and detecting inefficient database accesses (i.e., performance problems) in the source code, and we rank the detected problems based on their severity. We first conduct an empirical study on the maintenance of ORM code in both open source and industrial applications. We find that ORM performance-related configurations are rarely tuned in practice, and there is a need for tools that can help improve/tune the performance of ORM-based applications. Thus, we propose approaches along two dimensions to help developers improve the performance of ORM-based applications: 1) helping developers write more performant ORM code; and 2) helping developers configure ORM configurations. To provide tooling support to developers, we first propose static analysis approaches to detect performance anti-patterns in the source code. We automatically rank the detected anti-pattern instances according to their performance impacts. Our study finds that by resolving the detected anti-patterns, application performance can be improved by 34% on average. We then discuss our experience and lessons learned when integrating our anti-pattern detection tool into industrial practice. We hope our experience can help improve the industrial adoption of future research tools. However, as static analysis approaches are prone to false positives and lack runtime information, we also propose dynamic analysis approaches to further help developers improve the performance of their database access code. We propose automated approaches to detect redundant data access anti-patterns in the database access code, and our study finds that resolving such redundant data access anti-patterns can improve application performance by an average of 17%. Finally, we propose an automated approach to tune performance-related ORM configurations using both static and dynamic analysis. Our study shows that our approach can help improve application throughput by 27-138%. Through our case studies on real-world applications, we show that all of our proposed approaches can provide valuable support to developers and help improve application performance significantly.
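As a simplified illustration of static anti-pattern detection, the sketch below uses Python's ast module to flag one classic ORM anti-pattern: issuing a query inside a loop (a common cause of one-by-one, or "N+1", database access). The ORM method names it looks for are assumptions for illustration; the thesis's detectors target real ORM frameworks and rank instances by measured performance impact.

import ast

# ORM calls treated as database accesses (assumed names, e.g. Django/SQLAlchemy style).
QUERY_METHODS = {"filter", "get", "all", "first", "execute"}

def find_queries_in_loops(source: str, filename: str = "<src>"):
    """Return (line, call) pairs where an ORM query appears inside a for/while loop."""
    findings = []
    tree = ast.parse(source, filename=filename)
    for node in ast.walk(tree):
        if isinstance(node, (ast.For, ast.While)):
            for inner in ast.walk(node):
                if (isinstance(inner, ast.Call)
                        and isinstance(inner.func, ast.Attribute)
                        and inner.func.attr in QUERY_METHODS):
                    findings.append((inner.lineno, inner.func.attr))
    return findings

example = """
for user in users:
    orders = Order.objects.filter(owner=user)   # query executed once per user
    print(orders.all())
"""
for line, call in find_queries_in_loops(example):
    print(f"line {line}: '{call}' called inside a loop - potential one-by-one access")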

Relevance:

90.00%

Publisher:

Abstract:

Available under the GNU Lesser General Public License (LGPL3)

Relevance:

90.00%

Publisher:

Abstract:

Technology is advancing rapidly, and today there is a great need to use other developers' code in order to keep up with the fast pace. Collectively, these components are called frameworks or libraries, and they help the developer get from start to finish more efficiently without having to write all of the program code themselves. These third-party solutions are almost always bound to a license agreement, whose restrictions and permissions the developer must follow when using them. In this study we have examined the awareness of the licenses attached to these third-party solutions. Since our pre-study revealed that our case had relatively low awareness, we also chose to look at how awareness can be increased. To do this, we interviewed developers and project managers at a consulting company. We also investigated which factors are important for raising awareness, and what consequences could arise from deficient license management. We found that knowledge of third-party licenses at the studied company was lacking, as was awareness of how the company complied with the restrictions of each license. To raise awareness, we propose aids in the form of an automated, centralized solution, cheat sheets for a simpler overview of the license agreements, and the use of existing off-the-shelf software to help increase license awareness and improve license management.

Relevance:

90.00%

Publisher:

Abstract:

Abstract: Decision support systems have been widely used for years in companies to gain insights from internal data and thus make successful decisions. Lately, thanks to the increasing availability of open data, these systems are also integrating open data to enrich the decision-making process with external data. On the other hand, within an open-data scenario, decision support systems can also be useful for deciding which data should be opened, considering not only technical or legal constraints but also other requirements, such as the "reusing potential" of the data. In this talk, we focus on both issues: (i) open data for decision making, and (ii) decision making for opening data. We will first briefly comment on some research problems regarding the use of open data for decision making. Then, we will give an outline of a novel decision-making approach (based on how open data is actually being used in open-source projects hosted on GitHub) for supporting open data publication. Bio of the speaker: Jose-Norberto Mazón holds a PhD from the University of Alicante (Spain). He is head of the "Cátedra Telefónica" on Big Data and coordinator of the Computing degree at the University of Alicante. He is also a member of the WaKe research group at the University of Alicante. His research work focuses on open data management, data integration and business intelligence within "big data" scenarios, and their application to the tourism domain (smart tourism destinations). He has published his research in international journals such as Decision Support Systems, Information Sciences, Data & Knowledge Engineering and ACM Transactions on the Web. Finally, he is involved in the open data project at the University of Alicante, including its open data portal at http://datos.ua.es

Relevance:

90.00%

Publisher:

Abstract:

FEA simulation of thermal metal cutting is central to interactive design and manufacturing. It is therefore relevant to assess the applicability of open FEA software to simulate 2D heat transfer in metal sheet laser cuts. Application of open source codes (e.g. FreeFem++, FEniCS, MOOSE) makes additional scenarios possible (e.g. parallel, CUDA, etc.) at lower cost. However, a precise assessment is required of the scenarios in which open software can be a sound alternative to a commercial one. This article contributes in this regard by presenting a comparison of the aforementioned open FEM software for the simulation of heat transfer in thin (i.e. 2D) sheets subject to a gliding laser point source. We use the commercial ABAQUS software as the reference against which the open software is compared. A convective linear thin-sheet heat transfer model, with and without material removal, is used. This article does not intend a full design of computer experiments. Our partial assessment shows that the thin-sheet approximation turns out to be adequate in terms of the relative error for linear alumina sheets. For mesh resolutions finer than 10e−5 m, the temperatures computed by the open and reference software differ by at most 1% of the temperature prediction. Ongoing work includes adaptive re-meshing, nonlinearities, sheet stress analysis and Mach (also called 'relativistic') effects.
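For a flavour of what such a simulation looks like in one of the open codes mentioned above, here is a minimal sketch of 2D transient heat conduction in a thin sheet with a gliding Gaussian heat source, written against the legacy FEniCS (dolfin) Python API. The geometry, material values, source parameters and insulated boundary treatment are illustrative assumptions and do not reproduce the article's convective model or its material-removal variant.

from fenics import *

mesh = RectangleMesh(Point(0.0, 0.0), Point(0.1, 0.02), 100, 20)  # 10 cm x 2 cm sheet
V = FunctionSpace(mesh, "P", 1)

T_prev = interpolate(Constant(300.0), V)   # initial temperature [K] (assumed)
T = TrialFunction(V)
w = TestFunction(V)

rho_cp = Constant(3.0e6)   # volumetric heat capacity [J/(m^3 K)] (assumed)
k = Constant(30.0)         # thermal conductivity [W/(m K)] (assumed)
dt = 1.0e-3                # time step [s]
vel = 0.05                 # laser traverse speed [m/s] (assumed)

# Gaussian heat source gliding along the x axis; boundaries are insulated (no Dirichlet BCs).
q = Expression("Q*exp(-(pow(x[0]-vel*t, 2) + pow(x[1]-yc, 2))/(2*s*s))",
               Q=1.0e9, vel=vel, t=0.0, yc=0.01, s=5.0e-4, degree=2)

# Implicit Euler weak form of rho*cp*dT/dt = div(k*grad(T)) + q
a = rho_cp*T*w*dx + dt*k*dot(grad(T), grad(w))*dx
rhs = rho_cp*T_prev*w*dx + dt*q*w*dx

T_new = Function(V)
t = 0.0
for n in range(100):      # 0.1 s of simulated time
    t += dt
    q.t = t               # move the source
    solve(a == rhs, T_new)
    T_prev.assign(T_new)

print("max temperature [K]:", T_new.vector().max())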

Relevance:

90.00%

Publisher:

Abstract:

Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. As a result, the steps of the analysis are simplified and intermediate diagnostics are possible. In particular, the transformation can be carried out by means of simple statistical techniques such as low-pass filters based on the running mean and the standard deviation, and the fitting procedure is a stationary one with a few degrees of freedom and is easy to implement and control. An open-source MATLAB toolbox has been developed to cover this methodology, which is available at https://github.com/menta78/tsEva/ (Mentaschi et al., 2016).
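A minimal sketch of the transformed-stationary idea in Python (the released toolbox itself is in MATLAB): normalize the series by a running mean and running standard deviation, fit a stationary GEV to the annual maxima of the standardized series, and map a return level back to the original, non-stationary scale. The synthetic data, window length and return period are illustrative assumptions.

import numpy as np
import pandas as pd
from scipy.stats import genextreme

# Hypothetical hourly series with a slow trend and seasonality (a stand-in for,
# e.g., significant wave height); a real analysis would use observed or modelled data.
t = pd.date_range("1980-01-01", "2019-12-31 23:00", freq="h")
rng = np.random.default_rng(0)
x = pd.Series(1.0 + 2e-6*np.arange(t.size)
              + 0.3*np.sin(2*np.pi*t.dayofyear/365.25)
              + rng.gumbel(0.0, 0.2, t.size), index=t)

# (i) Transform to a quasi-stationary series via a slowly varying running mean and std
#     (here a centred 5-year window).
window = 5*365*24
mu_t = x.rolling(window, center=True, min_periods=window//2).mean()
sigma_t = x.rolling(window, center=True, min_periods=window//2).std()
y = ((x - mu_t) / sigma_t).dropna()

# (ii) Stationary EVA on the standardized series: GEV fit to annual maxima.
annual_max = y.groupby(y.index.year).max()
c, loc, scale = genextreme.fit(annual_max.values)

# Standardized 100-year return level, then reverse-transformed into a
# time-varying return level on the original scale.
y100 = genextreme.ppf(1 - 1/100, c, loc=loc, scale=scale)
x100_t = mu_t + sigma_t*y100
print(x100_t.dropna().iloc[[0, -1]])   # non-stationary 100-year level, start vs. end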

Relevance:

90.00%

Publisher:

Abstract:

This paper deals with the interaction between fictitious capital and the neoliberal model of growth and distribution, inspired by the classical economic tradition. Our renewed interest in this literature has a close connection with the recent international crisis in the capitalist economy. However, this discussion takes as its point of departure the fact that standard economic theory teaches that financial capital, in this world of increasing globalization, leads to new investment opportunities which improve levels of growth, employment, income distribution, and equilibrium. Accordingly, it is said that such financial resources expand the welfare of people and countries worldwide. Here we examine some illusions and paradoxes of such a paradigm. We show some theoretical and empirical consequences of this vision, which are quite different and have harmful constraints.

Relevance:

90.00%

Publisher:

Abstract:

Thesis (Ph.D., Computing) -- Queen's University, 2016-09-30