923 results for AUTOMATED SOFTWARE ENGINEERING
Abstract:
Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Here, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on the analysis problem via conceptual data-mining models instead of low-level programming tasks related to the underlying platform's technical details. These tasks are now entrusted to the model-transformation scaffolding.
Abstract:
Data mining is one of the most important analysis techniques for automatically extracting knowledge from large amounts of data. Nowadays, data mining is based on low-level specifications of the employed techniques, typically bound to a specific analysis platform. Therefore, data mining lacks a modelling architecture that allows analysts to consider it as a truly software-engineering process. Bearing in mind this situation, we propose a model-driven approach based on (i) a conceptual modelling framework for data mining, and (ii) a set of model transformations to automatically generate both the data under analysis (deployed via data-warehousing technology) and the analysis models for data mining (tailored to a specific platform). Thus, analysts can concentrate on understanding the analysis problem via conceptual data-mining models instead of wasting effort on low-level programming tasks related to the underlying platform's technical details. These time-consuming tasks are now entrusted to the model-transformation scaffolding. The feasibility of our approach is shown by means of a hypothetical data-mining scenario where a time-series analysis is required.
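The abstract does not show the authors' metamodel or transformation language, so the following is only a minimal sketch of the general idea of a model transformation: a small conceptual time-series mining model (class, attribute and method names invented for illustration) is turned into a platform-specific analysis script.

```java
// Hypothetical sketch of a conceptual-model-to-platform transformation.
// Class and attribute names are illustrative, not the authors' metamodel.
public class MiningModelTransformation {

    /** Minimal conceptual model: which measure to analyse, over which time grain. */
    record TimeSeriesMiningModel(String factTable, String measure, String timeDimension, int horizon) {}

    /** Transformation: conceptual model -> platform-specific analysis script (pseudo-SQL text). */
    static String toPlatformScript(TimeSeriesMiningModel m) {
        return String.join("\n",
            "-- generated from the conceptual mining model",
            "SELECT " + m.timeDimension() + ", SUM(" + m.measure() + ") AS value",
            "FROM " + m.factTable(),
            "GROUP BY " + m.timeDimension() + ";",
            "-- forecast horizon: " + m.horizon());
    }

    public static void main(String[] args) {
        TimeSeriesMiningModel model = new TimeSeriesMiningModel("sales_fact", "amount", "month", 12);
        System.out.println(toPlatformScript(model));
    }
}
```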
Empirical study on the maintainability of Web applications: Model-driven Engineering vs Code-centric
Abstract:
Model-driven Engineering (MDE) approaches are often acknowledged to improve the maintainability of the resulting applications. However, there is a scarcity of empirical evidence that backs their claimed benefits and limitations with respect to code-centric approaches. The purpose of this paper is to compare the performance and satisfaction of junior software maintainers while executing maintainability tasks on Web applications with two different development approaches, one being OOH4RIA, a model-driven approach, and the other being a code-centric approach based on Visual Studio .NET and the Agile Unified Process. We have conducted a quasi-experiment with 27 graduate students from the University of Alicante. They were randomly divided into two groups, and each group was assigned to a different Web application on which they performed a set of maintainability tasks. The results show that maintaining Web applications with OOH4RIA clearly improves the performance of subjects. It also tips the satisfaction balance in favor of OOH4RIA, although not significantly. Model-driven development methods seem to improve both the developers’ objective performance and their subjective opinions on ease of use of the method. Nevertheless, further experimentation is needed to generalize the results to different populations, methods, languages and tools, different domains, and different application sizes.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-04
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Software Configuration Management is the discipline of managing large collections of software development artefacts from which software products are built. Software configuration management tools typically deal with artefacts at fine levels of granularity - such as individual source code files - and assist with coordination of changes to such artefacts. This paper describes a lightweight tool, designed to be used on top of a traditional file-based configuration management system. The add-on tool support enables users to flexibly define new hierarchical views of product structure, independent of the underlying artefact-repository structure. The tool extracts configuration and change data with respect to the user-defined hierarchy, leading to improved visibility of how individual subsystems have changed. The approach yields a range of new capabilities for build managers, and verification and validation teams. The paper includes a description of our experience using the tool in an organization that builds large embedded software systems.
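As a rough illustration of the user-defined hierarchical view described above, the sketch below (hypothetical names, not taken from the tool itself) maps subsystems to repository path prefixes, independently of the repository layout, and aggregates changed files per subsystem.

```java
// Hypothetical sketch: a user-defined subsystem hierarchy mapped onto repository paths,
// used to aggregate change data per subsystem. Names are illustrative only.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SubsystemViewSketch {

    /** Subsystem -> repository path prefixes it owns (independent of repository structure). */
    static final Map<String, List<String>> VIEW = new LinkedHashMap<>();
    static {
        VIEW.put("flight-control", List.of("src/fc/", "src/common/filters/"));
        VIEW.put("telemetry",      List.of("src/tm/"));
    }

    /** Aggregate changed files (e.g., taken from a commit log) against the user-defined hierarchy. */
    static Map<String, Integer> changesPerSubsystem(List<String> changedFiles) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        VIEW.keySet().forEach(s -> counts.put(s, 0));
        for (String file : changedFiles) {
            VIEW.forEach((subsystem, prefixes) -> {
                if (prefixes.stream().anyMatch(file::startsWith)) {
                    counts.merge(subsystem, 1, Integer::sum);
                }
            });
        }
        return counts;
    }

    public static void main(String[] args) {
        List<String> changed = List.of("src/fc/pid.c", "src/tm/frame.c", "src/fc/loop.c");
        System.out.println(changesPerSubsystem(changed)); // {flight-control=2, telemetry=1}
    }
}
```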
Abstract:
While object-oriented programming offers great solutions for today's software developers, this success has created difficult problems in class documentation and testing. In Java, two tools provide assistance: Javadoc allows class interface documentation to be embedded as code comments and JUnit supports unit testing by providing assert constructs and a test framework. This paper describes JUnitDoc, an integration of Javadoc and JUnit, which provides better support for class documentation and testing. With JUnitDoc, test cases are embedded in Javadoc comments and used as both examples for documentation and test cases for quality assurance. JUnitDoc extracts the test cases for use in HTML files serving as class documentation and in JUnit drivers for class testing. To address the difficult problem of testing inheritance hierarchies, JUnitDoc provides a novel solution in the form of a parallel test hierarchy. A small controlled experiment compares the readability of JUnitDoc documentation to formal documentation written in Object-Z. Copyright (c) 2005 John Wiley & Sons, Ltd.
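The abstract does not give JUnitDoc's concrete tag syntax, so the fragment below only suggests the flavour of the idea: a test case embedded in a Javadoc comment so it can serve both as a documentation example and as an extractable unit test. The @junitdoc.test tag name and the EventLog class are invented for illustration.

```java
// Illustrative only: the @junitdoc.test tag and its contents are hypothetical,
// sketching a test case embedded in a Javadoc comment that doubles as an example.
import java.util.ArrayDeque;
import java.util.Deque;

public class EventLog {

    private final Deque<String> entries = new ArrayDeque<>();

    /**
     * Appends a message to the log.
     *
     * @junitdoc.test
     *   EventLog log = new EventLog();
     *   log.append("started");
     *   assertEquals(1, log.size());
     *   assertEquals("started", log.latest());
     */
    public void append(String message) { entries.addLast(message); }

    public int size() { return entries.size(); }

    public String latest() { return entries.peekLast(); }
}
```

A JUnitDoc-style extractor would then copy the tagged snippet both into the HTML class documentation and into a JUnit driver that executes the asserts.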
Abstract:
Processor emulators are a software tool for allowing legacy computer programs to be executed on a modern processor. In the past emulators have been used in trivial applications such as maintenance of video games. Now, however, processor emulation is being applied to safety-critical control systems, including military avionics. These applications demand utmost guarantees of correctness, but no verification techniques exist for proving that an emulated system preserves the original system’s functional and timing properties. Here we show how this can be done by combining concepts previously used for reasoning about real-time program compilation, coupled with an understanding of the new and old software architectures. In particular, we show how both the old and new systems can be given a common semantics, thus allowing their behaviours to be compared directly.
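As a toy illustration of the "common semantics" idea (not the paper's formalism), the sketch below interprets one legacy instruction and its emulated host-code counterpart as functions over the same abstract state, so their behaviours can be compared directly. The instruction, state and names are invented for illustration.

```java
// Toy illustration: a legacy instruction and its emulated counterpart are both given
// semantics as functions over one shared abstract state, allowing direct comparison.
import java.util.function.UnaryOperator;

public class CommonSemanticsSketch {

    /** Shared abstract state: a single accumulator register. */
    record State(int acc) {}

    /** Semantics of the legacy processor's "ADD #5" instruction. */
    static final UnaryOperator<State> LEGACY_ADD5 = s -> new State(s.acc() + 5);

    /** Semantics of the emulator's host-code replacement for the same instruction. */
    static final UnaryOperator<State> EMULATED_ADD5 = s -> new State(s.acc() + 5);

    public static void main(String[] args) {
        // Compare the two behaviours directly over a sample of states.
        for (int a = -2; a <= 2; a++) {
            State s = new State(a);
            assert LEGACY_ADD5.apply(s).equals(EMULATED_ADD5.apply(s))
                : "emulation diverges at acc=" + a;
        }
        System.out.println("behaviours agree on the sampled states");
    }
}
```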
Abstract:
Real-time software systems are rarely developed once and left to run. They are subject to changing requirements as the applications they support expand, and they commonly outlive the platforms they were designed to run on. A successful real-time system is duplicated and adapted to a variety of applications - it becomes a product line. Current methods for real-time software development are commonly based on low-level programming languages and involve considerable duplication of effort when a similar system is to be developed or the hardware platform changes. To provide more dependable, flexible and maintainable real-time systems at a lower cost, what is needed is a platform-independent approach to real-time systems development. The development process is composed of two phases: a platform-independent phase that defines the desired system behaviour and develops a platform-independent design and implementation, and a platform-dependent phase that maps the implementation onto the target platform. The latter phase should be highly automated. For critical systems, assessing dependability is crucial. The partitioning into platform-dependent and platform-independent phases has to support verification of system properties through both phases.
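A minimal sketch of the two-phase split described above, with invented names: the platform-independent phase fixes the behaviour against an abstraction, and the platform-dependent phase maps that abstraction onto one concrete target.

```java
// Invented names; a minimal sketch of separating platform-independent behaviour
// from the platform-dependent mapping onto a concrete target.
public class TwoPhaseSketch {

    /** Platform-independent abstraction: all behaviour is written against this. */
    interface MillisecondClock {
        long nowMillis();
    }

    /** Platform-independent behaviour: a deadline check with no platform details. */
    static boolean deadlineMissed(MillisecondClock clock, long startMillis, long deadlineMillis) {
        return clock.nowMillis() - startMillis > deadlineMillis;
    }

    /** Platform-dependent phase: map the abstraction onto one concrete platform (here, the JVM). */
    static final MillisecondClock JVM_CLOCK = System::currentTimeMillis;

    public static void main(String[] args) {
        long start = JVM_CLOCK.nowMillis();
        System.out.println("missed 10 ms deadline yet? " + deadlineMissed(JVM_CLOCK, start, 10));
    }
}
```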
Abstract:
A framework is a reusable design that requires software components to function. To instantiate a framework, a software engineer must provide the software components required by the framework. To do this effectively, the framework-component interfaces must be specified so the software engineer knows what assumptions the framework makes about the components, and so the components can be verified against these assumptions. This paper presents an approach to specifying software frameworks. The approach involves the specification of the framework’s syntax, semantics, and the interfaces between the framework and its components. The approach is demonstrated with a simple case study.
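To make the framework-component interface idea concrete, the sketch below (hypothetical names, not the paper's case study) states one assumption the framework makes about a plug-in component in the interface documentation and checks it when the component is registered.

```java
// Hypothetical names; a sketch of stating a framework's assumptions about its
// components at the interface, so components can be checked against them.
import java.util.Objects;

public class FrameworkSpecSketch {

    /**
     * Component interface required by the framework.
     * Framework assumption: handle() must not return null.
     */
    interface RequestHandler {
        String handle(String request);
    }

    /** The reusable framework design: it supplies the control flow, the component fills the hole. */
    static class MiniFramework {
        private RequestHandler handler;

        /** Registration verifies one stated assumption against a sample input. */
        void register(RequestHandler h) {
            Objects.requireNonNull(h.handle("probe"), "component violates the non-null assumption");
            this.handler = h;
        }

        String run(String request) { return handler.handle(request); }
    }

    public static void main(String[] args) {
        MiniFramework fw = new MiniFramework();
        fw.register(req -> "echo:" + req);   // component satisfies the stated assumption
        System.out.println(fw.run("ping"));  // echo:ping
    }
}
```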
Abstract:
Proof reuse, or analogical reasoning, involves reusing the proof of a source theorem in the proof of a target conjecture. We have developed a method for proof reuse that is based on the generalisation replay paradigm described in the literature, in which a generalisation of the source proof is replayed to construct the target proof. In this paper, we describe the novel aspects of our method, which include a technique for producing more accurate source proof generalisations (using knowledge of the target goal), as well as a flexible replay strategy that allows the user to set various parameters to control the size and the shape of the search space. Finally, we report on the results of applying this method to a case study from the realm of software verification.
Abstract:
It is not surprising that students are unconvinced about the benefits of formal methods if we do not show them how these methods can be integrated with other activities in the software lifecycle. In this paper, we describe an approach to integrating formal specification with more traditional verification and validation techniques in a course that teaches formal specification and specification-based testing. This is accomplished through a series of assignments on a single software component that involves specifying the component in Object-Z, validating that specification using inspection and a specification animation tool, and then testing an implementation of the specification using test cases derived from the formal specification.
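The specification-based-testing step can be hinted at with a small sketch; plain Java stands in for Object-Z here, and the clamp component is invented for illustration. The postcondition transcribed from the specification is kept as an executable predicate and reused as the oracle for implementation tests.

```java
// Hedged sketch: plain Java stands in for an Object-Z specification, and an
// invented component illustrates deriving a test oracle from the specification.
import java.util.function.BiPredicate;

public class SpecBasedTestSketch {

    /** Specified operation (the component under test): clamp a value into [lo, hi]. */
    static int clamp(int value, int lo, int hi) {
        return Math.max(lo, Math.min(hi, value));
    }

    /** Postcondition transcribed from the specification: the result lies in range,
     *  and equals the input whenever the input was already in range. */
    static final BiPredicate<int[], Integer> POST =
        (in, result) -> result >= in[1] && result <= in[2]
            && (in[0] < in[1] || in[0] > in[2] || result == in[0]);

    public static void main(String[] args) {
        int[][] cases = { {5, 0, 10}, {-3, 0, 10}, {42, 0, 10} };  // cases derived from the spec's partitions
        for (int[] c : cases) {
            int result = clamp(c[0], c[1], c[2]);
            assert POST.test(c, result) : "specification violated for input " + c[0];
        }
        System.out.println("all specification-derived cases pass");
    }
}
```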
Abstract:
Global Software Development (GSD) is an emerging distributed software engineering practice in which a higher communication overhead, due to temporal and geographical separation among developers, is traded for gains in reduced development cost, improved flexibility and mobility for developers, increased access to skilled resource pools, and convenience of customer involvement. However, due to its distributed nature, GSD faces many fresh challenges relating to project coordination, awareness, collaborative coding and effective communication. New software engineering methodologies and processes are required to address these issues. Research has shown that, with adequate support tools, Distributed Extreme Programming (DXP) – a distributed variant of the agile methodology Extreme Programming (XP) – can be both efficient and beneficial to GSD projects. In this paper, we present the design and realization of a collaborative environment, called Moomba, which assists a distributed team in both the instantiation and execution of a DXP process in GSD projects.
Abstract:
A major challenge in teaching software engineering to undergraduates is that most students have limited industry experience, so the problems addressed are unknown and hence unappreciated. Issues of scope prevent a realistic software engineering experience, and students often graduate with a simplistic view of software engineering’s challenges. Problems and Programmers (PnP) is a competitive, physical card game that simulates the software engineering process from requirements specification to product delivery. Deliverables are abstracted, allowing a focus on process issues and for lessons to be learned in a relatively short time. The rules are easy to understand and the game’s physical nature allows for face-to-face interaction between players. The game’s developers have described PnP in previous publications, but this paper reports the game’s use within a larger educational scheme. Students learn and play PnP, and then are required to create a software requirements specification based on the game. Finally, students reflect on the game’s strengths and weaknesses and their experiences in an individual essay. The paper discusses this approach, students’ experiences and overall outcomes, and offers an independent, critical look at the game, its use, and potential improvements.