21 results for Factory of software

in CentAUR: Central Archive University of Reading - UK


Relevance:

100.00%

Publisher:

Abstract:

The evolvability of a software artifact is its capacity for producing heritable or reusable variants; the inverse quality is the artifact's inertia or resistance to evolutionary change. Evolvability in software systems may arise from engineering and/or self-organising processes. We describe our 'Conditional Growth' simulation model of software evolution and show how it can be used to investigate evolvability from a self-organisation perspective. The model is derived from the Bak-Sneppen family of 'self-organised criticality' simulations. It shows good qualitative agreement with Lehman's 'laws of software evolution' and reproduces phenomena that have been observed empirically. The model suggests interesting predictions about the dynamics of evolvability and implies that much of the observed variability in software evolution can be accounted for by comparatively simple self-organising processes.
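
The Conditional Growth model itself is not reproduced in the abstract, but the Bak-Sneppen family it derives from is simple enough to sketch. Below is a minimal, hypothetical Python illustration of a Bak-Sneppen-style update loop with software components standing in for species; the constants and names are illustrative, not taken from the paper.

```python
import random

N_COMPONENTS = 100   # modules in the simulated system (illustrative)
N_STEPS = 10_000     # evolution steps to run (illustrative)

# Each component gets a random "fitness": its resistance to change.
fitness = [random.random() for _ in range(N_COMPONENTS)]

for _ in range(N_STEPS):
    # The least-fit component is the most likely site of the next change.
    worst = min(range(N_COMPONENTS), key=fitness.__getitem__)
    # Bak-Sneppen update: changing one component forces change in its
    # neighbours (a ring topology here), modelling coupling between modules.
    for idx in (worst - 1, worst, (worst + 1) % N_COMPONENTS):
        fitness[idx] = random.random()

# After many steps the system self-organises: a fitness threshold emerges
# below which values are rare, with no external tuning required.
print(min(fitness), sum(fitness) / N_COMPONENTS)
```

This update rule is what produces the punctuated, avalanche-like dynamics that 'self-organised criticality' refers to.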

Relevance:

100.00%

Publisher:

Abstract:

The SPE taxonomy of evolving software systems, first proposed by Lehman in 1980, is re-examined in this work. The primary concepts of software evolution are related to generic theories of evolution, particularly Dawkins' concept of a replicator, to the hermeneutic tradition in philosophy and to Kuhn's concept of paradigm. These concepts provide the foundations that are needed for understanding the phenomenon of software evolution and for refining the definitions of the SPE categories. In particular, this work argues that a software system should be defined as of type P if its controlling stakeholders have made a strategic decision that the system must comply with a single paradigm in its representation of domain knowledge. The proposed refinement of SPE is expected to provide a more productive basis for developing testable hypotheses and models about possible differences in the evolution of E- and P-type systems than is provided by the original scheme. Copyright (C) 2005 John Wiley & Sons, Ltd.

Relevance:

100.00%

Publisher:

Abstract:

Despite the prediction of the demise of cities with the advance of new information and communication technologies in the New Economy, the software industry has emerged from cities in the USA, Europe and Asia in the past two decades. This article explores the reasons why cities are centres of software clusters, with reference to Boston, London and Dublin. It is suggested that cities' roles as centres of knowledge flows and creativity are the key determinants of their competitiveness in the knowledge-intensive software industry.

Relevance:

100.00%

Publisher:

Abstract:

A model based on graph isomorphisms is used to formalize software evolution. Step by step, we narrow the search space by an informed selection of attributes based on the current state of the art in software engineering and generate a seed solution. We then traverse the resulting space using graph isomorphisms and other set operations over the vertex sets. The new solutions will preserve the desired attributes. The goal of defining an isomorphism-based search mechanism is to construct predictors of evolution that can facilitate the automation of the 'software factory' paradigm. The model allows for automation via software tools implementing the concepts.
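
The abstract does not spell out the model's operations, so the following Python sketch (using networkx, with a hypothetical 'role' attribute and invented graphs) only illustrates the general idea: a candidate variant of a seed design is accepted when an attribute-respecting isomorphism to the seed exists, so the selected attributes are preserved.

```python
import networkx as nx
from networkx.algorithms.isomorphism import categorical_node_match

# Seed solution: a small design graph whose vertices carry the selected
# attribute (a hypothetical 'role' label).
seed = nx.Graph()
seed.add_nodes_from([(0, {"role": "ui"}), (1, {"role": "core"}),
                     (2, {"role": "db"})])
seed.add_edges_from([(0, 1), (1, 2)])

def preserves_attributes(candidate, reference):
    """Accept a candidate only if some isomorphism to the reference
    also matches the 'role' attribute on every vertex."""
    return nx.is_isomorphic(candidate, reference,
                            node_match=categorical_node_match("role", None))

# A relabelled variant: same structure and roles under new names.
variant = nx.relabel_nodes(seed, {0: "a", 1: "b", 2: "c"})
print(preserves_attributes(variant, seed))   # True: attributes preserved

# Rewiring the edges breaks every role-respecting isomorphism.
broken = variant.copy()
broken.remove_edge("a", "b")
broken.add_edge("a", "c")
print(preserves_attributes(broken, seed))    # False
```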

Relevance:

90.00%

Publisher:

Abstract:

Pair Programming is a technique from the software development method eXtreme Programming (XP) whereby two programmers work closely together to develop a piece of software. A similar approach has been used to develop a set of Assessment Learning Objects (ALOs). Three members of academic staff developed a set of ALOs for three different modules (two with overlapping content), in each case taking a pair-programming approach to the development of the ALOs. In addition to the efficiency of this approach in terms of staff time spent developing the ALOs, a statistical analysis of the outcomes for students who made use of the ALOs demonstrates the effectiveness of the ALOs produced via this method.

Relevance:

90.00%

Publisher:

Abstract:

We present a method to enhance fault localization for software systems based on a frequent pattern mining algorithm. Our method is based on a large set of test cases for a given set of programs in which faults can be detected. The test executions are recorded as function call trees. Based on test oracles, the tests can be classified into successful and failing tests. A frequent pattern mining algorithm is used to identify frequent subtrees in successful and failing test executions. This information is used to rank functions according to their likelihood of containing a fault. The ranking suggests an order in which to examine the functions during fault analysis. We validate our approach experimentally using a subset of the Siemens benchmark programs.
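
Full frequent-subtree mining is beyond a short example, so the hypothetical Python sketch below reduces the idea to individual functions: each function is scored by how much more often it appears in failing than in passing executions. This is a stand-in for the subtree statistics the abstract describes, but it yields the same kind of ranked examination order. The run data and function names are invented.

```python
from collections import Counter

# Hypothetical recorded executions: the set of functions appearing in
# each test's call tree, classified by the test oracle.
passing_runs = [{"main", "parse", "eval"},
                {"main", "parse", "emit"}]
failing_runs = [{"main", "parse", "eval", "round_half"},
                {"main", "eval", "round_half"}]

def rank_functions(passing, failing):
    """Rank functions by (frequency in failing runs) minus
    (frequency in passing runs), most suspicious first."""
    in_fail = Counter(f for run in failing for f in run)
    in_pass = Counter(f for run in passing for f in run)
    score = {f: in_fail[f] / len(failing) - in_pass[f] / len(passing)
             for f in set(in_fail) | set(in_pass)}
    return sorted(score, key=score.get, reverse=True)

# 'round_half' ranks first: it occurs only in failing executions.
print(rank_functions(passing_runs, failing_runs))
```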

Relevance:

90.00%

Publisher:

Abstract:

This paper describes some of the preliminary outcomes of a UK project looking at control education. The focus is on two aspects: (i) the most important control concepts and theories for students taking just one or two courses, and (ii) the effective use of software to improve student learning and engagement. There is also some discussion of the correct balance between teaching theory and practice. The paper gives examples from numerous UK universities and some industrial comment.

Relevance:

90.00%

Publisher:

Abstract:

Oxford University Press’s response to technological change in printing and publishing processes in this period can be considered in three phases: an initial period when the computerization of typesetting was seen as offering both cost savings and the ability to produce new editions of existing works more quickly; an intermediate phase when the emergence of standards in desktop computing allowed experiments with the sale of software as well as packaged electronic publications; and a third phase when the availability of the world wide web as a means of distribution allowed OUP to return to publishing in its traditional areas of strength albeit in new formats. Each of these phases demonstrates a tension between a desire to develop centralized systems and expertise, and a recognition that dynamic publishing depends on distributed decision-making and innovation. Alongside these developments in production and distribution lay developments in computer support for managerial and collaborative publishing processes, often involving the same personnel and sometimes the same equipment.

Relevance:

90.00%

Publisher:

Abstract:

Recently major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high-performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive.

To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism. The first contribution addresses the classic problem of distributed association rule mining, focusing on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared-memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed-memory systems; this approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning the parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection, describing a parallel algorithm for feature selection from random subsets.

Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
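
As a generic illustration of the data parallelism these contributions exploit (not code from any of the workshop papers), the hypothetical Python sketch below counts 2-itemset supports over horizontal partitions of a transaction database in a process pool and merges the partial counts; this count-then-merge pattern underlies classic distributed association rule mining.

```python
from collections import Counter
from itertools import combinations
from multiprocessing import Pool

def count_pairs(transactions):
    """Count 2-itemset supports within one horizontal partition."""
    counts = Counter()
    for t in transactions:
        counts.update(combinations(sorted(t), 2))
    return counts

if __name__ == "__main__":
    # Toy transaction database, split into one partition per worker.
    partitions = [
        [{"bread", "milk"}, {"bread", "beer", "milk"}],
        [{"beer", "milk"}, {"bread", "milk"}],
    ]
    with Pool(processes=2) as pool:
        partial = pool.map(count_pairs, partitions)
    # Merge step: the global support is the sum of the partial counts.
    total = sum(partial, Counter())
    print(total.most_common(3))
```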

Relevance:

80.00%

Publisher:

Abstract:

This paper makes a contribution in bridging the theory and practice of the polyhedral model for designing parallel algorithms. Although the theory of the polyhedral model is well developed, designers of massively parallel algorithms are unable to benefit from it due to the lack of software tools that incorporate the wide range of transformations that are possible in the model. The Uniformization tool that we developed was the first to integrate a number of techniques and to completely automate the transformation step, allowing designers to explore a wide range of feasible designs from high-level specifications.
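
The Uniformization tool itself is not shown in the abstract; as a hypothetical illustration of the kind of transformation it automates, the Python sketch below uniformizes a matrix-vector kernel. Reading x[j] at every iteration (i, j) is a broadcast, a non-uniform affine dependence; propagating x along i replaces it with a constant-distance dependence, the property that lets the loop nest map onto a regular processor array.

```python
N = 4
A = [[i + j for j in range(N)] for i in range(N)]
x = list(range(N))

# Original kernel: y[i] += A[i][j] * x[j], with x[j] broadcast to all i.
y_ref = [sum(A[i][j] * x[j] for j in range(N)) for i in range(N)]

# Uniformized kernel: x is injected at i == 0 and then handed from
# iteration (i - 1, j) to (i, j), so every dependence has the constant
# distance vector (1, 0).
xp = [[0] * N for _ in range(N)]   # pipelined copies of x
y = [0] * N
for i in range(N):
    for j in range(N):
        xp[i][j] = x[j] if i == 0 else xp[i - 1][j]   # uniform dependence
        y[i] += A[i][j] * xp[i][j]

assert y == y_ref
print(y)
```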