7 results for open source code
in Aston University Research Archive
Abstract:
Extensible Business Reporting Language (XBRL) is being adopted by European regulators as a data standard for the exchange of business information. This paper examines the approach of XBRL International (XII) to the meta-data standard's development and diffusion. We theorise the development of XBRL using concepts drawn from a model of successful open source projects. Comparison of the open source model to XBRL enables us to identify a number of interesting similarities and differences. In common with open source projects, the benefits and progress of XBRL have been overstated and 'hyped' by enthusiastic participants. While XBRL is an open data standard in terms of access to the equivalent of its 'source code', we find that the governance structure of the XBRL consortium differs significantly from a model open source approach. The barrier to participation created by requiring paid membership and a focus on transacting business at physical conferences and meetings is identified as particularly critical. Decisions about the technical structure of XBRL, the regulator-led pattern of adoption and the organisation of XII are discussed. Finally, areas for future research are identified.
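For readers unfamiliar with the standard, the sketch below illustrates the kind of tagged data XBRL standardises: a tiny, hypothetical instance fragment read with Python's standard library. The namespaces, element names, contexts and values are invented for illustration and are not taken from the paper.

```python
# Minimal sketch: reading facts from a hypothetical XBRL-style instance fragment.
# Namespaces, element names, context identifiers and values are invented.
import xml.etree.ElementTree as ET

INSTANCE = """
<xbrl xmlns="http://www.xbrl.org/2003/instance"
      xmlns:ex="http://example.com/taxonomy">
  <ex:Revenue contextRef="FY2012" unitRef="EUR" decimals="0">1500000</ex:Revenue>
  <ex:Assets contextRef="FY2012" unitRef="EUR" decimals="0">4200000</ex:Assets>
</xbrl>
"""

root = ET.fromstring(INSTANCE)
for fact in root:
    # Each business fact is a tagged value tied to a reporting context and a unit.
    name = fact.tag.split("}")[-1]
    print(name, fact.attrib["contextRef"], fact.attrib["unitRef"], fact.text)
```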
Abstract:
Monitoring land-cover changes on sites of conservation importance allows environmental problems to be detected, solutions to be developed and the effectiveness of actions to be assessed. However, the remoteness of many sites or a lack of resources means these data are frequently not available. Remote sensing may provide a solution, but large-scale mapping and change detection may not be appropriate, necessitating site-level assessments. These need to be easy to undertake, rapid and cheap. We present an example of a Web-based solution based on free and open-source software and standards (including PostGIS, OpenLayers, Web Map Services, Web Feature Services and GeoServer) to support assessments of land-cover change (and validation of global land-cover maps). Authorised users are provided with the means to assess land cover visually and may optionally provide uncertainty information at various levels: from a general rating of their confidence in an assessment to a quantification of the proportions of land-cover types within a reference area. Versions of this tool have been developed for the TREES-3 initiative (Simonetti, Beuchle and Eva, 2011), which monitors tropical land-cover change through ground-truthing at latitude/longitude degree confluence points, and for monitoring change within and around Important Bird Areas (IBAs) by BirdLife International and the Royal Society for the Protection of Birds (RSPB). In this paper we present results from the second of these applications. We also present further details on the potential use of the land-cover change assessment tool on sites of recognised conservation importance, in combination with NDVI and other time-series data from the eStation (a system for receiving, processing and disseminating environmental data). We show how the tool can be used to increase the usability of earth observation data by local stakeholders and experts, and to assist in evaluating the impact of protection regimes on land-cover change.
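The Web Map Service standard used by the described stack exposes rendered layers over plain HTTP, so a client can retrieve imagery for a site with a single request. The sketch below is a minimal illustration of such a GetMap request; the endpoint URL, layer name and bounding box are hypothetical, not those of the actual deployment.

```python
# Minimal sketch: fetching a rendered land-cover layer from a WMS endpoint such as
# a GeoServer instance. The URL, layer name and bounding box are hypothetical.
import requests

WMS_URL = "https://example.org/geoserver/wms"  # hypothetical endpoint

params = {
    "service": "WMS",
    "version": "1.1.1",
    "request": "GetMap",
    "layers": "landcover:iba_assessment",  # hypothetical layer name
    "srs": "EPSG:4326",
    "bbox": "30.0,-2.0,31.0,-1.0",         # lon/lat bounding box of the site
    "width": "512",
    "height": "512",
    "format": "image/png",
}

response = requests.get(WMS_URL, params=params, timeout=30)
response.raise_for_status()
with open("site_landcover.png", "wb") as f:
    f.write(response.content)
```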
Abstract:
Developing Cyber-Physical Systems requires methods and tools to support simulation and verification of hybrid (both continuous and discrete) models. The Acumen modeling and simulation language is an open source testbed for exploring the design space of what rigorous-but-practical next-generation tools can deliver to developers of Cyber-Physical Systems. Like verification tools, a design goal for Acumen is to provide rigorous results. Like simulation tools, it aims to be intuitive, practical, and scalable. However, it is far from evident whether these two goals can be achieved simultaneously. This paper explains the primary design goals for Acumen, the core challenges that must be addressed in order to achieve these goals, the “agile research method” taken by the project, the steps taken to realize these goals, the key lessons learned, and the emerging language design.
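The abstract does not reproduce any Acumen code, so the sketch below uses Python to illustrate the kind of hybrid model such tools target: continuous dynamics punctuated by a discrete event (a bouncing ball whose velocity is reversed on impact). It is illustrative only, and all parameter values are assumptions; it is not Acumen syntax or the project's method.

```python
# Minimal sketch of a hybrid (continuous + discrete) model: a ball falling under
# gravity (continuous dynamics) with an impact event that reverses its velocity
# (discrete transition). Illustrative only; not Acumen syntax.
G = 9.81           # gravitational acceleration, m/s^2
RESTITUTION = 0.8  # fraction of speed retained at each bounce
DT = 0.001         # integration step, s

def simulate(height=1.0, velocity=0.0, t_end=3.0):
    t, x, v = 0.0, height, velocity
    trace = [(t, x)]
    while t < t_end:
        # Continuous phase: forward-Euler integration of x' = v, v' = -g.
        x += v * DT
        v += -G * DT
        t += DT
        # Discrete event: impact with the ground.
        if x <= 0.0 and v < 0.0:
            x = 0.0
            v = -RESTITUTION * v
        trace.append((t, x))
    return trace

if __name__ == "__main__":
    final_t, final_x = simulate()[-1]
    print(f"height after {final_t:.2f} s: {final_x:.3f} m")
```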
Abstract:
The two-dimensional 'Mercedes Benz' (MB) or BN2D water model (Naim, 1971) is implemented in Molecular Dynamics. It is known that the MB model can capture anomalous properties of real water (high heat capacity, minima of pressure and isothermal compressibility, negative thermal expansion coefficient) (Silverstein et al., 1998). In this work, formulas for calculating the thermodynamic, structural and dynamic properties of the model from Molecular Dynamics simulation, in the microcanonical (NVE) and isothermal-isobaric (NPT) ensembles, are derived and verified against known Monte Carlo results. The convergence of the thermodynamic properties and the system's numerical stability are investigated. The results qualitatively reproduce the peculiarities of real water, making the model a visually convenient tool that also requires fewer computational resources, thus allowing simulations of large (hydrodynamic-scale) molecular systems. We provide the open source code, written in C/C++, for the BN2D water model implementation using Molecular Dynamics.
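As a point of orientation for readers new to Molecular Dynamics, the sketch below shows the velocity-Verlet integration loop that underlies constant-energy (NVE) simulation. To keep it short it uses a plain Lennard-Jones pair force rather than the full MB hydrogen-bond term, and all parameter values are illustrative assumptions, not those of the paper's C/C++ code.

```python
# Minimal sketch of an NVE velocity-Verlet loop of the kind used in MD codes.
# Uses a plain Lennard-Jones pair force, not the full MB hydrogen-bond term;
# all parameters are illustrative.
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces for a small 2D system (no cutoff, no PBC)."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r2 = np.dot(rij, rij)
            sr6 = (sigma**2 / r2) ** 3
            f = 24.0 * eps * (2.0 * sr6**2 - sr6) / r2 * rij
            forces[i] += f
            forces[j] -= f
    return forces

def velocity_verlet(pos, vel, mass=1.0, dt=1e-3, steps=1000):
    """Integrate Newton's equations at constant energy (NVE ensemble)."""
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos += dt * vel
        f = lj_forces(pos)
        vel += 0.5 * dt * f / mass
    return pos, vel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Nine particles on a small 2D grid, spacing 1.5 sigma, with small thermal velocities.
    positions = np.array([[i, j] for i in range(3) for j in range(3)], dtype=float) * 1.5
    velocities = rng.normal(0.0, 0.1, size=positions.shape)
    positions, velocities = velocity_verlet(positions, velocities)
    print("kinetic energy:", 0.5 * np.sum(velocities**2))
```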
Abstract:
GitHub is the most popular repository for open source code (Finley 2011). It has more than 3.5 million users, as the company declared in April 2013, and more than 10 million repositories, as of December 2013. It has a publicly accessible API and, since March 2012, it also publishes a stream of all the events occurring on public projects. Interactions among GitHub users are of a complex nature and take place in different forms. Developers create and fork repositories, push code, approve code pushed by others, bookmark their favorite projects and follow other developers to keep track of their activities. In this paper we present a characterization of GitHub, as both a social network and a collaborative platform. To the best of our knowledge, this is the first quantitative study about the interactions happening on GitHub. We analyze the logs from the service over 18 months (between March 11, 2012 and September 11, 2013), describing 183.54 million events, and we obtain information about 2.19 million users and 5.68 million repositories, both growing linearly in time. We show that the distributions of the number of contributors per project, watchers per project and followers per user follow a power-law-like shape. We analyze social ties and repository-mediated collaboration patterns, and we observe a remarkably low level of reciprocity of the social connections. We also measure the activity of each user in terms of authored events, and we observe that very active users do not necessarily have a large number of followers. Finally, we provide a geographic characterization of the centers of activity and we investigate how distance influences collaboration.
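One of the quantities discussed above, the reciprocity of follow relations, is straightforward to compute once the event logs have been reduced to directed edges. The sketch below illustrates the calculation; the input format (one "follower,followed" pair of user identifiers per line) is a hypothetical reduction of the data, not the GitHub event schema itself.

```python
# Minimal sketch: reciprocity of directed "follow" edges extracted from event logs.
# The "follower,followed" line format is hypothetical, not the GitHub event schema.
def reciprocity(edge_lines):
    """Fraction of directed follow edges whose reverse edge also exists."""
    edges = set()
    for line in edge_lines:
        src, dst = line.strip().split(",")
        if src != dst:
            edges.add((src, dst))
    mutual = sum(1 for (src, dst) in edges if (dst, src) in edges)
    return mutual / len(edges) if edges else 0.0

if __name__ == "__main__":
    sample = ["alice,bob", "bob,alice", "alice,carol", "dave,alice"]
    # Two of the four edges are reciprocated, so the result is 0.50.
    print(f"reciprocity: {reciprocity(sample):.2f}")
```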
Abstract:
In the UK, Open Learning has been used in industrial training for at least the last decade. Trainers and Open Learning practitioners have been concerned about the quality of the products and services being delivered. The argument put forward in this thesis is that there is ambiguity amongst industrialists over the meanings of 'Open Learning' and 'Quality in Open Learning'. For clarity, a new definition of Open Learning is proposed which challenges the traditional learner-centred approach favoured by educationalists. It introduces the concept that there are benefits afforded to the trainer/employer/teacher as well as to the learner. This enables a focussed view of what quality in Open Learning really means. Having discussed these issues, a new quantitative method of evaluating Open Learning is proposed. This is based upon an assessment of the degree of compliance with which products meet Parts 1 & 2 of the Open Learning Code of Practice. The vehicle for these research studies has been a commercial contract commissioned by the Training Agency for the Engineering Industry Training Board (EITB) to examine the quality of Open Learning products supplied to the engineering industry. A major part of this research has been the application of the evaluation technique to a range of 67 Open Learning products (in eight subject areas). The findings were that good quality products can be found right across the price range, and so can average and poor quality ones. The study also shows quite convincingly that there are good quality products to be found at less than £50. Finally, the majority (24 out of 34) of the good quality products were text based.
Abstract:
The potential for sharing environmental data and models is huge, but doing so can be challenging for experts without specific programming expertise. We built an open-source, cross-platform framework (‘Tzar’) to run models across distributed machines. Tzar is simple to set up and use, allows dynamic parameter generation and enhances reproducibility by accessing versioned data and code. Combining Tzar with Docker helps us lower the entry barrier further by versioning and bundling all required modules and dependencies, together with the database needed to schedule work.
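To make the idea of dynamic parameter generation concrete, the sketch below expands a base parameter set into one scheduled run per combination of sweep values, which distributed workers could then claim and execute. It illustrates the concept only; the parameter names and record layout are assumptions, not Tzar's actual API or schema.

```python
# Minimal sketch of dynamic parameter generation for distributed model runs.
# Parameter names and the run-record layout are illustrative, not Tzar's schema.
import itertools
import json

base_params = {"model": "landcover_change", "random_seed": 42}
sweep = {
    "rainfall_scenario": ["dry", "wet"],
    "spread_rate": [0.1, 0.2, 0.4],
}

runs = []
for run_id, values in enumerate(itertools.product(*sweep.values())):
    params = dict(base_params, **dict(zip(sweep.keys(), values)))
    runs.append({"run_id": run_id, "status": "scheduled", "parameters": params})

# Each worker machine would claim a scheduled run, execute the versioned model
# code against these parameters, and write results back for reproducibility.
print(json.dumps(runs[:2], indent=2))
```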