158 results for Software Re-use
Abstract:
Component software has many benefits, most notably increased software re-use; however, the component software process places heavy burdens on programming language technology, which modern object-oriented programming languages do not address. In particular, software components require specifications that are both sufficiently expressive and sufficiently abstract, and, where possible, these specifications should be checked formally by the programming language. This dissertation presents a programming language called Mentok that provides two novel programming language features enabling improved specification of stateful component roles. Negotiable interfaces are interface types extended with protocols, and allow specification of changing method availability, including some patterns of out-calls and re-entrance. Type layers are extensions to module signatures that allow specification of abstract control flow constraints through the interfaces of a component-based application. Development of Mentok's unique language features included creation of MentokC, the Mentok compiler, and formalization of key properties of Mentok in mini-languages called MentokP and MentokL.
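The idea behind negotiable interfaces, i.e. interface types whose method availability changes as the component moves through a protocol, can be illustrated with a small sketch. This is not Mentok syntax (which checks such protocols statically); it is a hypothetical Python analogue that enforces the same kind of protocol at runtime, with all names invented for the example.

```python
# Hypothetical sketch, not Mentok syntax: a stateful component role whose
# method availability changes over time. Mentok's negotiable interfaces
# express such protocols statically in the type system; here the protocol
# is enforced dynamically instead.

class ProtocolError(Exception):
    """Raised when a method is called outside its permitted protocol state."""

class FileRole:
    # Protocol: closed --open--> open --read*--> open --close--> closed
    _allowed = {
        "closed": {"open"},
        "open": {"read", "close"},
    }

    def __init__(self):
        self._state = "closed"

    def _require(self, method):
        # Reject calls that the current protocol state does not permit.
        if method not in self._allowed[self._state]:
            raise ProtocolError(f"{method}() unavailable in state {self._state!r}")

    def open(self):
        self._require("open")
        self._state = "open"

    def read(self):
        self._require("read")
        return "data"

    def close(self):
        self._require("close")
        self._state = "closed"
```

Calling `read()` on a freshly constructed `FileRole` raises `ProtocolError`; after `open()` it succeeds. A static checker in the style of Mentok would reject the ill-ordered call sequence at compile time rather than at run time.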
Abstract:
Open Educational Resources (OER) are teaching, learning and research materials that have been released under an open licence that permits online access and re-use by others. The 2012 Paris OER Declaration encourages the open licensing of educational materials produced with public funds. Digital data and data sets produced as a result of scientific and non-scientific research are an increasingly important category of educational materials. This paper discusses the legal challenges presented when publicly funded research data is made available as OER, arising from intellectual property rights, confidentiality and information privacy laws, and the lack of a legal duty to ensure data quality. If these legal challenges are not understood, addressed and effectively managed, they may impede and restrict access to and re-use of research data. This paper identifies some of the legal challenges that need to be addressed and describes ten proposed best practices recommended for adoption so that publicly funded research data can be made available for access and re-use as OER.
Abstract:
Many construction industry decision-makers believe there is a lack of off-site manufacture (OSM) adoption for non-residential construction in Australia. Identification of the construction business process was considered imperative to assist decision-makers to increase OSM utilisation. The premise that domain knowledge can be re-used to provide an intervention point in the construction process led a team of researchers to construct simple baseline process models for the complete construction process, segmented into six phases. Sixteen industry experts with domain knowledge were asked to review the construction-phase baseline models to answer the question “Where in the process illustrated by this baseline model phase is an OSM task?”. Through an iterative and generative process, a number of off-site manufacture intervention points were identified and integrated into the process models. The re-use of industry expert domain knowledge provided suggestions for new ways to do basic tasks, thus facilitating changes to current practice. It is expected that implementation of the new processes will lead to systemic industry change and thus a growth in productivity due to increased adoption of OSM.
Abstract:
Numerous statements and declarations have been made over recent decades in support of open access to research data. The growing recognition of the importance of open access to research data has been accompanied by calls on public research funding agencies and universities to facilitate better access to publicly funded research data so that it can be re-used and redistributed as public goods. International and inter-governmental bodies such as ICSU/CODATA, the OECD and the European Union are strong supporters of open access to and re-use of publicly funded research data. This thesis focuses on the research data created by university researchers in Malaysian public universities whose research activities are funded by the Federal Government of Malaysia. Malaysia, like many countries, has not yet formulated a policy on open access to and re-use of publicly funded research data. The aim of this thesis is therefore to develop a policy to support the objective of enabling open access to and re-use of publicly funded research data in Malaysian public universities. Policy development is essential if this objective is to be achieved. In developing the policy, this thesis identifies a myriad of legal impediments arising from intellectual property rights, confidentiality, privacy and national security laws, novelty requirements in patent law, and the lack of a legal duty to ensure data quality. Legal impediments such as these restrict, obstruct, hinder or slow down the objective of enabling open access and re-use. A key focus in the formulation of the policy was the need to resolve the various legal impediments that have been identified. This thesis analyses the existing policies and guidelines of Malaysian public universities to ascertain to what extent the legal impediments have been resolved.
An international perspective is adopted by making a comparative analysis of the policies of public research funding agencies and universities in the United Kingdom, the United States and Australia to understand how they have dealt with the identified legal impediments. These countries have led the way in introducing policies which support open access to and re-use of publicly funded research data. As well as proposing a policy supporting open access to and re-use of publicly funded research data in Malaysian public universities, this thesis provides procedures for the implementation of the policy and guidelines for addressing the legal impediments to open access and re-use.
Abstract:
Business process modelling as a practice and research field has received great attention over recent years. Organizations invest significantly in process modelling in terms of training, tools, capabilities and resources. The return on this investment is a function of process model re-use, which we define as the recurring use of process models to support organizational work tasks. While prior research has examined re-use as a design principle, we explore re-use as a behaviour, because evidence suggests that analysts’ re-use of process models is in fact limited. In this paper we develop a two-stage conceptualization of the key object-, behaviour- and socio-organization-centric factors explaining process model re-use behaviour. We propose a theoretical model and detail implications for its operationalization and measurement. Our study can provide significant benefits to our understanding of process modelling and process model use as key practices in analysis and design.
Abstract:
QUT Software Finder (https://researchdatafinder.qut.edu.au/scf) is a searchable repository of metadata describing software and source code created as a result of QUT research activities. It was launched in December 2013. The registry was designed to aid the discovery and visibility of QUT research outputs and to encourage sharing and re-use of code and software throughout the research community, both nationally and internationally. The repository platform used is VIVO, an open source product initially developed at Cornell University. QUT Software Finder records that describe software or code are connected to information about the researchers involved, their research groups, related publications and related projects. Links to where the software or code can be accessed are also provided, alongside licensing and re-use information.
Abstract:
Key topics: Since the birth of the Open Source movement in the mid-1980s, open source software has become more and more widespread. Among others, the Linux operating system, the Apache web server and the Firefox web browser have taken substantial market share from their proprietary competitors. Open source software is governed by particular types of licenses. Whereas proprietary licenses only allow the software's use in exchange for a fee, open source licenses grant users additional rights such as the free use, free copy, free modification and free distribution of the software, as well as free access to the source code. This new phenomenon has raised many managerial questions: organizational issues related to the system of governance that underlies such open source communities (Raymond, 1999a; Lerner and Tirole, 2002; Lee and Cole, 2003; Mockus et al., 2000; Tuomi, 2000; Demil and Lecocq, 2006; O'Mahony and Ferraro, 2007; Fleming and Waguespack, 2007), collaborative innovation issues (Von Hippel, 2003; Von Krogh et al., 2003; Von Hippel and Von Krogh, 2003; Dahlander, 2005; Osterloh, 2007; David, 2008), issues related to the nature as well as the motivations of developers (Lerner and Tirole, 2002; Hertel, 2003; Dahlander and McKelvey, 2005; Jeppesen and Frederiksen, 2006), public policy and innovation issues (Jullien and Zimmermann, 2005; Lee, 2006), technological competition issues related to standard battles between proprietary and open source software (Bonaccorsi and Rossi, 2003; Bonaccorsi et al., 2004; Economides and Katsamakas, 2005; Chen, 2007), and intellectual property rights and licensing issues (Laat, 2005; Lerner and Tirole, 2005; Gambardella, 2006; Determann et al., 2007). A major unresolved issue concerns open source business models and revenue capture, given that open source licenses imply no fee for users.
On this topic, articles show that a commercial activity based on open source software is possible, as they describe different possible ways of doing business around open source (Raymond, 1999; Dahlander, 2004; Daffara, 2007; Bonaccorsi and Merito, 2007). These studies usually look at open source-based companies, which encompass a wide range of firms with different categories of activities: providers of packaged open source solutions, IT services and software engineering firms, and open source software publishers. However, the business model implications are different for each of these categories: the activities of providers of packaged solutions and of IT services and software engineering firms are based on software developed outside their boundaries, whereas commercial software publishers sponsor the development of the open source software. This paper focuses on open source software publishers' business models, as this issue is even more crucial for this category of firms, which take the risk of investing in the development of the software. The literature identifies and depicts only two generic types of business models for open source software publishers: the ''bundling'' business models (Pal and Madanmohan, 2002; Dahlander, 2004) and the dual licensing business models (Välimäki, 2003; Comino and Manenti, 2007). Nevertheless, these business models are not applicable in all circumstances. Methodology: The objectives of this paper are: (1) to explore in which contexts the two generic business models described in the literature can be implemented successfully, and (2) to depict an additional business model for open source software publishers which can be used in a different context. To do so, this paper draws upon an exploratory case study of IdealX, a French open source security software publisher. The case study consists of a series of three interviews conducted between February 2005 and April 2006 with the co-founder and the business manager.
It aims to depict the process of IdealX's search for an appropriate business model between its creation in 2000 and 2006. This software publisher tried both generic types of open source software publishers' business models before designing its own. Consequently, through IdealX's trials and errors, I investigate the conditions under which such generic business models can be effective. Moreover, this study describes the business model finally designed and adopted by IdealX: an additional open source software publisher's business model based on the principle of ''mutualisation'', which is applicable in a different context. Results and implications: This article contributes to ongoing empirical work within entrepreneurship and strategic management on open source software publishers' business models: it provides the characteristics of three generic business models (the bundling business model, the dual licensing business model and the mutualisation business model) as well as the conditions under which they can be successfully implemented (regarding the type of product developed and the competencies of the firm). This paper also goes beyond the traditional concept of business model used by scholars in the open source literature. In this article, a business model is not only considered as a way of generating income (a ''revenue model'' (Amit and Zott, 2001)), but rather as the necessary conjunction of value creation and value capture, in line with the recent literature on business models (Amit and Zott, 2001; Chesbrough and Rosenbloom, 2002; Teece, 2007). Consequently, this paper analyses the business models from the point of view of these two components.
Abstract:
Facility managers have to acquire, integrate, edit and update diverse facility information, ranging from building element and fabric data to operational costs, contract types, room allocation, logistics and maintenance. With the advent of standardized Building Information Models (BIM) such as the Industry Foundation Classes (IFC), new opportunities are available for facility managers to manage their FM data. The use of IFC supports data interoperability between different software systems, including the use of operational data for facility management systems. Besides the re-use of building data, the Building Information Model can be used as an information framework for storing and retrieving FM-related data. Several BIM-driven FM systems, including IFC-compliant ones, are currently available. These systems have the potential not only to manage primary data more effectively but also to offer practical systems for detailed monitoring and analysis of facility performance that can underpin innovative and more cost-effective management of complex facilities.
Abstract:
Computational biology increasingly demands the sharing of sophisticated data and annotations between research groups. Web 2.0 style sharing and publication requires that biological systems be described in well-defined, yet flexible and extensible formats which enhance exchange and re-use. In contrast to many of the standards for exchange in the genomic sciences, descriptions of biological sequences show a great diversity in format and function, impeding the definition and exchange of sequence patterns. In this presentation, we introduce BioPatML, an XML-based pattern description language that supports a wide range of patterns and allows the construction of complex, hierarchically structured patterns and pattern libraries. BioPatML unifies the diversity of current pattern description languages and fills a gap in the set of XML-based description languages for biological systems. We discuss the structure and elements of the language, and demonstrate its advantages in a series of applications, showing lightweight integration between the BioPatML parser and search engine and the SilverGene genome browser. We conclude by describing our site for large-scale pattern sharing, and our efforts to seed this repository.
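The flavour of a hierarchically structured, XML-based sequence-pattern description can be sketched briefly. The element names and attributes below are assumptions made for illustration, not the actual BioPatML schema; the sketch only shows how such a nested pattern might be parsed into a tree for a search engine to consume.

```python
# Illustrative sketch only: element and attribute names are invented for this
# example and are NOT the actual BioPatML schema. The point is the general
# idea of a hierarchical, XML-based sequence-pattern description.
import xml.etree.ElementTree as ET

PATTERN_XML = """
<Pattern name="promoter-like">
  <Series>
    <Motif name="box1" sequence="TATAAT" threshold="0.8"/>
    <Gap minimum="15" maximum="21"/>
    <Motif name="box2" sequence="TTGACA" threshold="0.8"/>
  </Series>
</Pattern>
"""

def load_pattern(xml_text):
    """Parse a pattern description into a nested (tag, attrs, children) tree."""
    def walk(node):
        return (node.tag, dict(node.attrib), [walk(child) for child in node])
    return walk(ET.fromstring(xml_text))

tree = load_pattern(PATTERN_XML)
```

Here the top-level `Pattern` composes sub-patterns (two motifs separated by a variable-length gap), which is the kind of hierarchical composition the abstract describes.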
Abstract:
Business process models are becoming available in large numbers due to their popular use in many industrial applications such as enterprise and quality engineering projects. On the one hand, this raises a challenge as to their proper management: how can it be ensured that the right process model is always available to the interested stakeholder? On the other hand, the richness of a large set of process models also offers opportunities, for example with respect to the re-use of existing model parts for new models. This paper describes the functionalities and architecture of an advanced process model repository, named APROMORE. This tool brings together a rich set of features for the analysis, management and usage of large sets of process models, drawing from state-of-the-art research in the field of process modeling. A prototype of the platform is presented, demonstrating its feasibility, together with an outlook on the further development of APROMORE.
Abstract:
This paper investigates how software designers use their knowledge during the design process. The research is based on the analysis of the observational and verbal data from three software design teams generated during the conceptual stage of the design process. The knowledge captured from the analysis of the mapped design team data is utilized to generate descriptive models of novice and expert designers. These models contribute to a better understanding of the connections between, and integration of, designer variables, and to a better understanding of software design expertise and its development. The models are transferable to other domains.
Abstract:
miRDeep and its variants are widely used to quantify known and novel microRNAs (miRNAs) from small RNA sequencing (RNAseq) data. This article describes miRDeep*, our integrated miRNA identification tool, which is modeled on miRDeep but improves the precision of detecting novel miRNAs by introducing new strategies to identify precursor miRNAs. miRDeep* has a user-friendly graphical interface and accepts raw data in FastQ and Sequence Alignment Map (SAM) or binary equivalent (BAM) format. Known and novel miRNA expression levels, as measured by the number of reads, are displayed in an interface which shows each RNAseq read relative to the pre-miRNA hairpin. The secondary pre-miRNA structure and read locations for each predicted miRNA are shown and kept in a separate figure file. Moreover, the target genes of known and novel miRNAs are predicted using the TargetScan algorithm, and the targets are ranked according to a confidence score. miRDeep* is an integrated standalone application in which sequence alignment, pre-miRNA secondary structure calculation and graphical display are purely Java coded. The tool can be executed on an ordinary personal computer with 1.5 GB of memory. Further, we show that miRDeep* outperformed existing miRNA prediction tools on our LNCaP and other small RNAseq datasets. miRDeep* is freely available online at http://www.australianprostatecentre.org/research/software/mirdeep-star
Abstract:
This paper describes the 3D Water Chemistry Atlas, an open source, Web-based system that enables three-dimensional (3D) sub-surface visualization of groundwater monitoring data, overlaid on the local geological model. Following a review of existing technologies, the system adopts Cesium (an open source Web-based 3D mapping and visualization interface) together with a PostgreSQL/PostGIS database for the technical architecture. In addition, a range of search, filtering, browsing and analysis tools was developed that enables users to interactively explore the groundwater monitoring data and interpret it spatially and temporally relative to the local geological formations and aquifers via the Cesium interface. The result is an integrated 3D visualization system that enables environmental managers and regulators to assess groundwater conditions, identify inconsistencies in the data, manage impacts and risks, and make more informed decisions about activities such as coal seam gas extraction and waste water extraction and re-use.
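The kind of sub-surface query such a system performs, relating monitoring samples to a geological unit's depth interval, can be sketched minimally. The data layout, formation name and depth figures below are assumptions for illustration, not the Atlas's actual schema or values.

```python
# Minimal sketch under assumed data (not the Atlas's actual schema): select
# groundwater monitoring samples whose depth falls within a named aquifer's
# depth interval -- the kind of 3D sub-surface filtering the Atlas exposes.
from dataclasses import dataclass

@dataclass
class Sample:
    bore_id: str
    depth_m: float      # sample depth below surface, metres
    analyte: str
    value_mg_l: float   # measured concentration, mg/L

# Assumed aquifer model: formation name -> (top depth, bottom depth) in metres.
AQUIFERS = {"Walloon Coal Measures": (150.0, 400.0)}

def samples_in_aquifer(samples, aquifer):
    """Return samples lying within the aquifer's depth interval."""
    top, bottom = AQUIFERS[aquifer]
    return [s for s in samples if top <= s.depth_m <= bottom]
```

In the real system this filtering would be a spatial query against the PostGIS database, evaluated against the full 3D geological model rather than a simple depth interval.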
Abstract:
Enterprise System (ES) implementation and management are knowledge-intensive tasks that inevitably draw upon the experience of a wide range of people with diverse knowledge capabilities. Knowledge Management (KM) has been identified as a critical success factor in ES projects. Despite the recognized importance of managing knowledge for ES benefits realization, systematic attempts to conceptualize KM-structures have been few. Where the adequacy of KM-structures is assessed, the process and measures are typically idiosyncratic and lack credibility. Using the ‘KM-process’, itself grounded in the sociology of knowledge, this paper conceptualizes four main constructs to measure the adequacy of KM-structures. The structural equation model (SEM) is tested using 310 responses gathered from 27 ES installations that had implemented SAP R/3. The findings reveal six constructs for KM-structure. Furthermore, the paper demonstrates the application of KM-structures in the context of ES using Adaptive Structuration Theory. The results demonstrate that having adequate KM-structures in place, while necessary, is not sufficient: these rules and resources must be appropriated to have a greater positive influence on the Enterprise System. Furthermore, the study provides empirical support for knowledge-based theory by illustrating the importance of knowledge use/re-use (vs. knowledge creation) as the most important driver in the KM process.