93 results for software engineering: metrics
Abstract:
The ASSERT project defined new software engineering methods and tools for the development of critical embedded real-time systems in the space domain. The ASSERT model-driven engineering process was one of the achievements of the project and is based on the concept of property-preserving model transformations. The key element of this process is that non-functional properties of the software system must be preserved during model transformations. Property preservation is carried out through model transformations compliant with the Ravenscar Profile and provides a formal basis for the process. In this way, the so-called Ravenscar Computational Model is central to the whole ASSERT process. This paper describes the work done in the HWSWCO study, whose main objective has been to address the integration of the hardware/software co-design phase into the ASSERT process. In order to do that, non-functional properties of the software system must also be preserved during hardware synthesis. Keywords: Ada 2005, Ravenscar profile, Hardware/Software co-design, real-time systems, high-integrity systems, ORK
Abstract:
Context: Measurement is crucial to empirical software engineering. Although reliability and validity are two important properties warranting consideration in measurement processes, they may be influenced by random or systematic error (bias) depending on which metric is used. Aim: To check whether the simple subjective metrics used in empirical software engineering studies are prone to bias. Method: Comparison of the reliability of a family of empirical studies on requirements elicitation that explore the same phenomenon using different design types and objective and subjective metrics. Results: The objectively measured variables (experience and knowledge) tend to achieve more reliable results, whereas subjective metrics using Likert scales (expertise and familiarity) tend to be influenced by systematic error or bias. Conclusions: Studies that predominantly use subjectively measured variables, such as opinion polls or expert opinion acquisition, are therefore particularly exposed to this kind of bias.
Abstract:
Usability is the capability of the software product to be understood, learned, used and attractive to the user, when used under specified conditions. Many studies demonstrate the benefits of usability, yet to this day software products continue to exhibit consistently low levels of this quality attribute. Furthermore, poor usability in software systems contributes largely to software failing in actual use. One of the main disciplines involved in usability is Human-Computer Interaction (HCI). Over the past two decades the HCI community has proposed specific features that should be present in applications to improve their usability, yet incorporating them into software continues to be far from trivial for software developers. These difficulties are due to multiple factors, including the high level of abstraction at which these HCI recommendations are made and how far removed they are from actual software implementation. In order to bridge this gap, the Software Engineering community has long proposed software design solutions to help developers include usability features in software; however, the problem remains an open research question. This doctoral thesis addresses the problem of helping software developers include specific usability features in their applications by providing them with structured and tangible guidance in the form of a process, which we have termed the Usability-Oriented Software Development Process. This process is supported by a set of Software Usability Guidelines that help developers incorporate a set of eleven usability features with high impact on software design. After developing the Usability-Oriented Software Development Process and the Software Usability Guidelines, they were validated across multiple academic projects and shown to help software developers include such usability features in their software applications. Their use significantly reduced development time and improved the quality of the resulting designs of these projects. Furthermore, in this work we propose a software tool to automate the application of the proposed process. In sum, this work contributes to the integration of the Software Engineering and HCI disciplines by providing a framework that helps software developers create usable applications in an efficient way.
Abstract:
For years, the Human-Computer Interaction (HCI) community has crafted usability guidelines that clearly define what characteristics a software system should have in order to be easy to use. However, the Software Engineering (SE) community keeps falling short of successfully incorporating these recommendations into software projects. From an SE perspective, the process of incorporating usability features into software is not always straightforward, as a large number of these features have heavy implications for the underlying software architecture. For example, successfully including an "undo" feature in an application requires the design and implementation of many complex interrelated data structures and functionalities. Our work focuses on providing developers with a set of software design patterns to assist them in the process of designing more usable software, contributing to the proper inclusion of specific usability features with high impact on the software design. Preliminary validation data show that usage of the guidelines also has positive effects on development time and overall software design quality.
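As an illustration of why such architecture-level support is needed, the sketch below shows a conventional, textbook-style command pattern for an undo facility in Python; it is a generic example, not one of the design patterns proposed in this work, and all class and method names are hypothetical.

```python
# Generic command-pattern sketch for "undo"; illustrative only, not the paper's
# proposed design patterns. All names are hypothetical.
from abc import ABC, abstractmethod


class Command(ABC):
    @abstractmethod
    def execute(self) -> None: ...

    @abstractmethod
    def undo(self) -> None: ...


class AppendText(Command):
    """Appends a piece of text to a document and knows how to take it back."""

    def __init__(self, document: list, text: str) -> None:
        self.document, self.text = document, text

    def execute(self) -> None:
        self.document.append(self.text)

    def undo(self) -> None:
        self.document.pop()


class CommandHistory:
    """Keeps executed commands so the most recent one can be undone."""

    def __init__(self) -> None:
        self._done = []

    def run(self, command: Command) -> None:
        command.execute()
        self._done.append(command)

    def undo_last(self) -> None:
        if self._done:
            self._done.pop().undo()


# Usage: execute a command, then undo it.
doc = []
history = CommandHistory()
history.run(AppendText(doc, "hello"))
history.undo_last()
assert doc == []
```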
Abstract:
Replication of software engineering experiments is crucial for dealing with validity threats to experiments in this area. Even though the empirical software engineering community is aware of the importance of replication, the replication rate is still very low. The RESER'11 Joint Replication Project aims to tackle this problem by simultaneously running a series of replications of the same experiment. In this article, we report the results of the replication run at the Universidad Politécnica de Madrid. Our results are inconsistent with those of the original experiment; however, we have identified possible causes for this discrepancy. We also discuss our experiences (in terms of pros and cons) during the replication.
Abstract:
In computer science, different types of reusable components for building software applications have been proposed as a direct consequence of the emergence of new software programming paradigms. The success of these components for building applications depends on factors such as the flexibility with which they can be combined or the ease with which they can be selected in centralised or distributed environments such as the Internet. In this article, we propose a general type of reusable component, called a primitive of representation, inspired by a knowledge-based approach that can promote reusability. The proposal can be understood as a generalisation of existing partial solutions that is applicable to both software and knowledge engineering for the development of hybrid applications that integrate conventional and knowledge-based techniques. The article presents the structure and use of the component and describes our recent experience in the development of real-world applications based on this approach.
Abstract:
Software testing is a key aspect of software reliability and quality assurance in a context where software development constantly has to overcome mammoth challenges in a continuously changing environment. One of the characteristics of software testing is that it has a large intellectual capital component and can thus benefit from the experience gained in past projects. Software testing can, then, potentially benefit from solutions provided by the knowledge management discipline. There are in fact a number of proposals concerning effective knowledge management related to several software engineering processes. Objective: We defend the use of a lessons learned system for software testing. The reason is that such a system is an effective knowledge management resource enabling testers and managers to take advantage of the experience locked away in the brains of the testers. To do this, the experience has to be gathered, disseminated and reused. Method: After analyzing the proposals for managing software testing experience, significant weaknesses were detected in the current systems of this type. The architectural model proposed here for lessons learned systems is designed to avoid these weaknesses. This model (i) defines the structure of the software testing lessons learned; (ii) sets up procedures for lessons learned management; and (iii) supports the design of software tools to manage the lessons learned. Results: A different approach, based on the management of the lessons learned that software testing engineers gather from everyday experience, with two basic goals: usefulness and applicability. Conclusion: The architectural model proposed here lays the groundwork to overcome the obstacles to sharing and reusing the experience gained in software testing and test management. As such, it provides guidance for developing software testing lessons learned systems.
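Purely as an illustration of the kind of artefacts such a system manages, the sketch below shows a minimal testing lessons learned record and repository in Python; the field names and operations are hypothetical and do not reproduce the architectural model proposed in the paper.

```python
# Hypothetical sketch of a testing lessons learned record and its gather/
# disseminate/reuse operations; illustrative only, not the paper's model.
from dataclasses import dataclass, field
from typing import List


@dataclass
class TestingLesson:
    title: str
    context: str          # e.g. project, test level, technique in use
    problem: str          # what went wrong (or surprisingly well)
    recommendation: str   # what to do about it next time
    keywords: List[str] = field(default_factory=list)


class LessonRepository:
    """Supports the three activities named above: gather, disseminate, reuse."""

    def __init__(self) -> None:
        self._lessons: List[TestingLesson] = []

    def gather(self, lesson: TestingLesson) -> None:
        self._lessons.append(lesson)

    def disseminate(self) -> List[TestingLesson]:
        return list(self._lessons)

    def reuse(self, keyword: str) -> List[TestingLesson]:
        return [l for l in self._lessons if keyword in l.keywords]


repo = LessonRepository()
repo.gather(TestingLesson(
    title="Flaky UI regression tests",
    context="web regression suite, browser-based tests",
    problem="timing-dependent failures hid real defects",
    recommendation="replace fixed sleeps with explicit waits",
    keywords=["ui", "flaky", "regression"],
))
assert repo.reuse("flaky")
```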
Abstract:
Software Configuration Management (SCM) techniques have been considered the entry point to rigorous software engineering, where multiple organizations cooperate in a decentralized mode to save resources, ensure the quality of a diversity of software products, and manage corporate information to get a better return on investment. The incessant trend towards Global Software Development (GSD) and the complexity of implementing a correct SCM solution grow not only because of changing circumstances, but also because of the interactions and forces related to GSD activities. This paper addresses the role SCM plays in the development of commercial products and systems, and introduces an SCM reference model describing the relationships between the different technical, organizational, and product concerns any software development company should support in the global market.
Abstract:
The focus of this paper is to outline the main structure of an alternative software process improvement method for small- and medium-sized enterprises. This method is based on the action package concept, which helps to institutionalize effective practices at affordable implementation costs. The paper also presents the results and lessons learned when this method was applied to three enterprises in the requirements engineering domain.
Abstract:
This paper proposes a highly automated mechanism for easily building an undo facility into a new or existing system. Our proposal is based on the observation that, for a large set of operators, it is not necessary to store in-memory object states or executed system commands in order to undo an action; storing the input data is enough. This strategy greatly simplifies the design of the undo process and encapsulates most of the required functionality in a framework structure similar to many object-oriented programming frameworks.
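To make the contrast with command- or state-based undo concrete, here is a minimal Python sketch of the idea as described above: the framework records the input data each operation received and undoes by re-applying the previously stored input. Class and method names are hypothetical, not the framework's actual API.

```python
# Minimal sketch: undo by re-applying previously stored input data, rather than
# by restoring object states or reversing commands. Names are hypothetical.
from typing import Any, Callable, List


class InputHistoryUndo:
    """Records the inputs applied to a target so earlier inputs can be re-applied."""

    def __init__(self, apply_input: Callable[[Any], None], initial_input: Any) -> None:
        self._apply_input = apply_input
        self._history: List[Any] = [initial_input]
        self._apply_input(initial_input)

    def do(self, input_data: Any) -> None:
        self._history.append(input_data)
        self._apply_input(input_data)

    def undo(self) -> None:
        if len(self._history) > 1:
            self._history.pop()                   # discard the latest input
            self._apply_input(self._history[-1])  # re-apply the previous one


# Usage: a form field whose content is always set from the latest user input.
form = {"name": None}
undoable = InputHistoryUndo(lambda data: form.update(name=data), initial_input="")
undoable.do("Ada")
undoable.do("Ada Lovelace")
undoable.undo()
assert form["name"] == "Ada"
```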
Abstract:
Background: There is no globally accepted open source software development process defining how open source software is developed in practice. A process description is important for coordinating all the software development activities involving both people and technology. Aim: The research question that this study sets out to answer is: what activities do open source software process models contain? The activity groups on which it focuses are Concept Exploration, Software Requirements, Design, Maintenance and Evaluation. Method: We conduct a systematic mapping study (SMS). An SMS is a form of systematic literature review that aims to identify and classify the available research papers concerning a particular issue. Results: We located a total of 29 primary studies, which we categorized by the open source software project that they examine and by activity type (Concept Exploration, Software Requirements, Design, Maintenance and Evaluation). The activities present in most of the open source software development processes were Execute Tests and Conduct Reviews, both of which belong to the Evaluation activity group. Maintenance is the only group whose activities are all addressed by primary studies. Conclusions: The primary studies located by the SMS are the starting point for analyzing the open source software development process and proposing a process model for this community. The papers in our pool that describe a specific open source software project say more about our research question than the papers that discuss open source software development without referring to a specific project.
Abstract:
An accepted fact in software engineering is that software must undergo a verification and validation process during development to ascertain and improve its quality. But there are more techniques than a single developer could master, and yet it is impossible to be certain that software is free of defects. It is therefore crucial for developers to be able to choose, from the available evaluation techniques, the one most suitable and likely to yield optimum quality results for different products. Some knowledge is available on the strengths and weaknesses of the available software quality assurance techniques, but not much is known yet about the relationships between different techniques and their behavior in context. Objective: This research investigates the effectiveness of two testing techniques (equivalence class partitioning and decision coverage) and one review technique (code review by abstraction) in terms of their fault detection capability. This will be used to strengthen the practical knowledge available on these techniques.
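For readers unfamiliar with the two testing techniques, the following Python sketch illustrates how they differ on a deliberately simple, hypothetical function; the programs and faults used in the study itself are not reproduced here.

```python
# Illustrative sketch of equivalence class partitioning vs. decision coverage
# on a hypothetical function; not the study's actual programs or faults.

def classify_score(score: int) -> str:
    """Return 'invalid', 'fail' or 'pass' for an exam score expected in 0..100."""
    if score < 0 or score > 100:
        return "invalid"
    if score < 50:
        return "fail"
    return "pass"


# Equivalence class partitioning: one representative input per input class
# (below range, failing range, passing range, above range).
equivalence_cases = {-5: "invalid", 30: "fail", 75: "pass", 101: "invalid"}

# Decision coverage: inputs chosen so that every decision in the code evaluates
# to both true and false at least once (-5, 30 and 75 already achieve this).
decision_cases = {-5: "invalid", 30: "fail", 75: "pass"}

for cases in (equivalence_cases, decision_cases):
    for score, expected in cases.items():
        assert classify_score(score) == expected
```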
Abstract:
Empirical Software Engineering (ESE) uses empirical studies as a means to generate evidence that helps determine under what circumstances it is better to use one software technology rather than another. This Master's Thesis is part of a line of research that explores whether the intuitions and/or preferences of the people who test software can predict the effectiveness of three code evaluation techniques: reading by stepwise abstractions, decision coverage and equivalence partitioning. To achieve this goal, the thesis analyzes the data collected in an empirical study run by the thesis tutors. In the empirical study, different subjects apply the three code evaluation techniques to three different programs into which a series of faults had been artificially introduced. Subjects are required to report the defects found in the programs, as well as to answer a series of questions about their intuitions and preferences.
The data analyses test: 1) what the intuitions and preferences of the subjects are (using the Pearson chi-square test); 2) whether subjects change their minds after applying the techniques (using the Kappa coefficient, the McNemar-Bowker test and the Stuart-Maxwell test); 3) the consistency of the different questions, comparing intuitions with intuitions, preferences with preferences and intuitions with preferences (using the Kappa coefficient); and 4) whether intuitions and/or preferences predict the actual effectiveness obtained (using the General Linear Model with repeated measures). The results show that there is no clear intuition or particular preference with respect to the programs. Moreover, although there are changes of mind after applying the techniques, there is no clear evidence to claim that intuition and preferences influence effectiveness. Finally, there are relationships between intuitions and intuitions, between preferences and preferences, and between intuitions and preferences, and these relationships are more noticeable after applying the techniques.
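A minimal sketch of what this analysis pipeline could look like in Python with scipy/statsmodels is shown below; all counts, variable names and category labels are hypothetical stand-ins for the study's actual data.

```python
# Hypothetical sketch of the analysis pipeline described above using
# scipy/statsmodels; the data below are invented for illustration.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
from statsmodels.stats.anova import AnovaRM
from statsmodels.stats.contingency_tables import SquareTable
from statsmodels.stats.inter_rater import cohens_kappa

# 1) Intuitions/preferences across programs: Pearson chi-square on a table of
#    counts (rows: programs, columns: preferred technique).
prefs = np.array([[12, 9, 10],
                  [11, 10, 10],
                  [8, 13, 10]])
chi2, p, dof, _ = chi2_contingency(prefs)
print(f"Pearson chi-square: chi2={chi2:.2f}, df={dof}, p={p:.3f}")

# 2) Change of mind before/after applying the techniques: agreement (Kappa),
#    McNemar-Bowker symmetry test and Stuart-Maxwell marginal homogeneity test
#    on the square before-vs-after contingency table.
before_after = np.array([[15, 3, 2],
                         [4, 12, 4],
                         [1, 5, 14]])
print(cohens_kappa(before_after))  # agreement between before and after answers
square = SquareTable(before_after)
print(square.symmetry())           # McNemar-Bowker test
print(square.homogeneity())        # Stuart-Maxwell test

# 3) Repeated-measures analysis of effectiveness per subject and technique,
#    standing in for the General Linear Model with repeated measures.
effectiveness = pd.DataFrame({
    "subject": np.repeat(np.arange(10), 3),
    "technique": ["abstraction", "decision", "equivalence"] * 10,
    "effectiveness": np.random.default_rng(0).uniform(0.3, 0.9, 30),
})
print(AnovaRM(effectiveness, depvar="effectiveness", subject="subject",
              within=["technique"]).fit())
```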
Abstract:
Empirical software engineering adapts the scientific method to software engineering (SE) in order to facilitate knowledge generation. Experimentation is one of the techniques used. For the knowledge generated experimentally to acquire the level of maturity necessary for later use, the experiments have to be replicated. As the same experiment is replicated more than once, there are numerous versions of all the products generated during a replication. These products are generally administered informally and without control.
This is troublesome when it comes to planning new replications or trying to gather information on replications conducted in the past. In order to grasp the size of the problem to be solved, this research examines the current state of the art of the management and use of experimental materials in replications, as well as the tools for managing experimental materials. The study concludes that none of the analysed approaches provides a solution to the stated problem. The aim of this research is to improve the administration of SE experimental materials and experimental replications in support of experiment replication. To do this, we propose the adaptation of the software configuration management (SCM) and software product line (SPL) paradigms to experimentation. The action research method was selected in order to develop this proposal. The first step in the adaptation of SCM to experimentation was to analyse the experimental process from the viewpoint of the transformation of products. The concepts were then adapted based on the software development and experimentation processes. Finally, a set of instruments were developed and added to an experiment configuration management plan (ECMP). The first step in the adaptation of the SPL to experimentation was to analyse the concepts, activities and phases underlying the SPL approach. The concepts were then adapted. Finally, techniques, symbols and models were developed or adapted in support of the experimentation product line (EPL) phases. The proposal is validated by evaluating its feasibility, flexibility, usability and satisfaction. Feasibility and flexibility are evaluated by instantiating the ECMP and the EPL in specific SE experiments. Usability is evaluated by using the proposal to generate the instances of the ECMP and the EPL. Satisfaction evaluates the information about the experiment contained in the ECMP and the EPL. The results of the validation show that the proposal performs better with respect to usability and experimenter satisfaction.