925 results for software engineering practices
Abstract:
In computer science, different types of reusable components for building software applications have been proposed as a direct consequence of the emergence of new software programming paradigms. The success of these components depends on factors such as the flexibility with which they can be combined and the ease with which they can be selected in centralised or distributed environments such as the Internet. In this article, we propose a general type of reusable component, called a primitive of representation, inspired by a knowledge-based approach that can promote reusability. The proposal can be understood as a generalisation of existing partial solutions that is applicable to both software and knowledge engineering for the development of hybrid applications that integrate conventional and knowledge-based techniques. The article presents the structure and use of the component and describes our recent experience in the development of real-world applications based on this approach.
Abstract:
Software testing is a key aspect of software reliability and quality assurance in a context where software development constantly has to overcome mammoth challenges in a continuously changing environment. One of the characteristics of software testing is that it has a large intellectual capital component and can thus benefit from the experience gained from past projects. Software testing can, then, potentially benefit from solutions provided by the knowledge management discipline. There are in fact a number of proposals concerning effective knowledge management related to several software engineering processes. Objective: We defend the use of a lessons learned system for software testing. The reason is that such a system is an effective knowledge management resource enabling testers and managers to take advantage of the experience locked away in the brains of the testers. To do this, the experience has to be gathered, disseminated and reused. Method: After analyzing the proposals for managing software testing experience, we detected significant weaknesses in the current systems of this type. The architectural model proposed here for lessons learned systems is designed to avoid these weaknesses. This model (i) defines the structure of the software testing lessons learned; (ii) sets up procedures for lessons learned management; and (iii) supports the design of software tools to manage the lessons learned. Results: The outcome is a different approach, based on managing the lessons learned that software testing engineers gather from everyday experience, with two basic goals: usefulness and applicability. Conclusion: The architectural model proposed here lays the groundwork to overcome the obstacles to sharing and reusing experience gained in software testing and test management. As such, it provides guidance for developing software testing lessons learned systems.
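The abstract does not spell out the lesson schema, so the sketch below is purely hypothetical: assumed field and function names, in Python, illustrating what a software-testing lessons learned record, and the gather/reuse procedures around it, could look like.

```python
# Hypothetical sketch of a software-testing "lesson learned" record and a
# minimal store supporting the gather / disseminate / reuse cycle described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LessonLearned:
    title: str
    context: str             # project, test level, technology under test
    problem: str              # what went wrong or what was observed
    recommendation: str       # what to do (or avoid) next time
    keywords: List[str] = field(default_factory=list)

class LessonStore:
    def __init__(self) -> None:
        self._lessons: List[LessonLearned] = []

    def gather(self, lesson: LessonLearned) -> None:
        """Record a new lesson taken from everyday testing experience."""
        self._lessons.append(lesson)

    def reuse(self, keyword: str) -> List[LessonLearned]:
        """Retrieve lessons relevant to a new testing task."""
        return [l for l in self._lessons
                if keyword.lower() in (k.lower() for k in l.keywords)]

# Example use of the hypothetical store:
store = LessonStore()
store.gather(LessonLearned(
    title="Flaky UI tests on nightly builds",
    context="Web client, system test level",
    problem="Timing-dependent selectors caused intermittent failures",
    recommendation="Wait on explicit UI state, not fixed delays",
    keywords=["ui", "flaky", "automation"]))
print([l.title for l in store.reuse("flaky")])
```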
Abstract:
Software Configuration Management (SCM) techniques have been considered the entry point to rigorous software engineering, where multiple organizations cooperate in a decentralized mode to save resources, ensure the quality of a diversity of software products, and manage corporate information to get a better return on investment. The relentless trend towards Global Software Development (GSD) and the complexity of implementing a correct SCM solution grow not only because of changing circumstances, but also because of the interactions and forces related to GSD activities. This paper addresses the role SCM plays in the development of commercial products and systems, and introduces an SCM reference model to describe the relationships between the different technical, organizational, and product concerns any software development company should support in the global market.
Abstract:
Engineering degree models have historically been diverse in Europe, and Spain is now adopting the Bologna process for European universities. Separate from the older universities, which are in part technically active, Civil Engineering (Caminos, Canales y Puertos) started in Spain at the end of the 18th century, adopting the French model of Upper Schools for state civil servants with an entrance examination. After the intense wars that followed 1800, the Ingenieros de Montes appeared as an Upper School to conserve forest regions, and in 1855 the Ingenieros Agrónomos followed to promote related techniques and practices. Other engineering disciplines appeared as Upper Schools oriented more towards private industry. All of these Upper Schools later gained associated Lower Schools of Ingeniero Técnico. Recently both grew considerably in number and evolved, linked also to recognized professions. Spanish society, within the European Community, evolved around the year 2000, in part very well, but with severe imbalances that led to severe youth unemployment in the 2008-2011 crisis. With the Bologna process, major formal changes took effect from 2010-11 and were accepted with intense adaptation. The Lower Schools are converging towards the Upper Schools, and since 2010-11 both have shifted to various four-year degrees (Grado), some tied to the pre-existing professions, and to diverse Masters. Their acceptance among students has started relatively well and will evolve, and the acceptance of the new degrees for employment in Spain, Europe or elsewhere will be essential. Each Grado now has quite rigid curricula and programs; MOODLE was introduced to connect students, and specific uses of personal computers are taught in each subject. The Escuela de Agrónomos centre, reorganized under its old name in its former buildings at the entrance of Campus Moncloa, offers Grados in Agronomic Engineering and Science for various public and private agricultural activities, Alimentary Engineering for food-related activities and control, Agro-Environmental Engineering more related to environmental activities, and in part Biotechnology, also taught in laboratories at Campus Monte-Gancedo for Plant Biotechnology and Computational Biotechnology. Curricula include basics, engineering, practices, visits, English, an end-of-degree project, and stays. Some Masters will lead to specific professional diplomas; the list now includes Agro-Engineering, Agro-Forestal Biotechnology, Agro and Natural Resources Economy, Complex Physical Systems, Gardening and Landscaping, Rural Engineering, Phytogenetic Resources, Plant Genetic Resources, Environmental Technology for Sustainable Agriculture, and Technology for Human Development and Cooperation.
Abstract:
This paper proposes a highly automated mechanism for easily building an undo facility into a new or existing system. Our proposal is based on the observation that, for a large set of operators, it is not necessary to store in-memory object states or executed system commands in order to undo an action; storing the input data is enough. This strategy greatly simplifies the design of the undo process and encapsulates most of the required functionality in a framework whose structure is similar to that of many object-oriented programming frameworks.
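The paper's framework is not reproduced in this abstract; the following minimal Python sketch, with assumed names, only illustrates the underlying idea that storing each operation's input data suffices for undo, since the current state can be rebuilt by replaying all inputs except the ones undone.

```python
# Minimal illustration (hypothetical names) of undo based on storing input data:
# instead of saving object snapshots or command objects, keep the raw inputs
# and rebuild the state by replaying all inputs that have not been undone.
from typing import Any, Callable, List

class ReplayUndo:
    def __init__(self, initial_state: Any, apply_input: Callable[[Any, Any], Any]):
        self._initial_state = initial_state
        self._apply_input = apply_input      # pure function: (state, input) -> new state
        self._inputs: List[Any] = []

    def perform(self, user_input: Any) -> Any:
        """Execute an operation by recording its input only."""
        self._inputs.append(user_input)
        return self.state()

    def undo(self) -> Any:
        """Undo the last operation by forgetting its input."""
        if self._inputs:
            self._inputs.pop()
        return self.state()

    def state(self) -> Any:
        """Rebuild the current state by replaying the stored inputs."""
        state = self._initial_state
        for user_input in self._inputs:
            state = self._apply_input(state, user_input)
        return state

# Example: a text buffer where each stored input appends a string.
editor = ReplayUndo("", lambda text, typed: text + typed)
editor.perform("Hello, ")
editor.perform("world")
print(editor.state())   # -> Hello, world
print(editor.undo())    # -> Hello,
```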
Abstract:
Background: There is no globally accepted open source software development process defining how open source software is developed in practice. A process description is important for coordinating all the software development activities involving both people and technology. Aim: The research question that this study sets out to answer is: What activities do open source software process models contain? The activity groups on which it focuses are Concept Exploration, Software Requirements, Design, Maintenance and Evaluation. Method: We conduct a systematic mapping study (SMS). An SMS is a form of systematic literature review that aims to identify and classify available research papers concerning a particular issue. Results: We located a total of 29 primary studies, which we categorized by the open source software project that they examine and by activity type (Concept Exploration, Software Requirements, Design, Maintenance and Evaluation). The activities present in most of the open source software development processes were Execute Tests and Conduct Reviews, which belong to the Evaluation activity group. Maintenance is the only group for which the primary studies address all the activities that it contains. Conclusions: The primary studies located by the SMS are the starting point for analyzing the open source software development process and proposing a process model for this community. The papers in our pool that describe a specific open source software project say more about our research question than the papers that discuss open source software development without referring to a specific project.
Abstract:
Background: Europe is living in an unsustainable situation.
The economic crisis has been reducing governments' economic resources since 2008 and threatening social and health systems, while the proportion of older people in the European population continues to increase, so that it is foreseen that in 2050 there will be only two workers per retiree [54]. To this must be added the rise, strongly related to age, of chronic diseases, whose burden has been estimated at up to 7% of a country's gross domestic product [51]. There is a need for a paradigm shift, for a new way of caring for people's health, shifting the focus from curing conditions that have already arisen to a sustainable and effective approach with the emphasis on prevention. Some advocate the adoption of personalised health care (pHealth), a model where medical practices are tailored to the patient's unique life, from the detection of risk factors to the customization of treatments based on each individual's response [81]. Personalised health is often associated with the use of Information and Communications Technology (ICT), which, with its exponential development, offers interesting opportunities for improving healthcare. The shift towards pHealth is slowly taking place, both in research and in industry, but the change is not yet significant. Many barriers still exist related to economy, politics and culture, while others are purely technological, like the lack of interoperable information systems [199]. Though interoperability aspects are evolving, there is still a need for a reference design, especially one tackling implementation and large-scale deployment of pHealth systems. This thesis contributes to organizing the subject of ICT systems for personalised health into a reference model that allows for the creation of software development platforms to ease common development issues in the domain. Research questions: RQ1 Is it possible to define a model, based on software engineering techniques, for representing the personalised health domain in an abstract and representative way? RQ2 Is it possible to build a development platform based on this model? RQ3 Does the development platform help developers create complex integrated pHealth systems? Methods: As the method for describing the model, the ISO/IEC/IEEE 42010 framework [25] was adopted for its generality and high level of abstraction. The model is specified in different parts: a conceptual model, which makes use of concept maps to represent stakeholders, artefacts and shared information, and scenarios and use cases to represent the functionalities of pHealth systems. The model was derived from literature analysis, including 7 industrial and scientific reports, 9 electronic standards, 10 conference proceedings papers, 37 journal papers, 25 websites and 5 books. Based on the reference model, requirements were drawn up for building the development platform, enriched with a set of requirements gathered in a survey run among 11 experienced engineers. For developing the platform, the continuous integration methodology [74] was adopted, which allowed automatic tests to be run on a server and packaged releases to be deployed on a web site. As the validation methodology, a theory-building framework for software engineering was adopted from [181]. The framework, chosen as a guide to find evidence for justifying the research questions, imposed the creation of theories based on models and propositions to be validated within a defined scope.
The validation of the model was conducted as an online survey in three rounds, encompassing a growing number of participants. The survey was submitted to 134 experts in the field and distributed on some public channels like relevant mailing lists and social networks. Its objective was to assess the model's readability, its level of coverage of the domain and its potential usefulness in the design of actual, derived systems. The questionnaires included quantitative Likert-scale questions and free-text inputs for comments. The development platform was validated in two scopes. As a small-scale experiment, the platform was used in a 12-hour training session where 4 developers had to perform an exercise consisting of developing a set of typical pHealth use cases. At the end of the session, a focus group was held to identify benefits and drawbacks of the platform. The second validation was held as a case study within a large-scale research project called HeartCycle, the aim of which was to develop a closed-loop disease management system for heart failure and coronary heart disease patients [160]. During this project three applications were developed by a team of programmers and designers. One of these applications was tested in a clinical trial with actual patients. At the end of the project, the team was interviewed in a focus group to assess the role the platform had played within the project. Results: As regards the model that describes the pHealth domain, its conceptual part includes a description of the main roles and concerns of pHealth stakeholders, a model of the ICT artefacts that are commonly adopted and a model representing the typical data that need to be formalized and exchanged among pHealth systems. The functional model includes a set of 18 scenarios, divided into the assisted person's view, the caregiver's view, the developer's view, the technology and services providers' view and the authority's view, and a set of 52 use cases grouped in 6 categories: assisted person's activities, system reactions, caregiver's activities, user engagement, developer's activities and deployer's activities. As for the validation of the model, a total of 65 people participated in the online survey, providing their level of agreement in all the assessed dimensions and a total of 248 comments on how to improve and complete the model. Participants' backgrounds spanned from engineering and software development (70%) to medical specialities (15%), with declared interest in the fields of eHealth (24%), mHealth (16%), Ambient Assisted Living (21%), Personalized Medicine (5%), Personal Health Systems (15%), Medical Informatics (10%) and Biomedical Engineering (8%), and with an average of 7.25±4.99 years of experience in these fields. From the analysis of the answers it can be observed that the contacted experts considered the model easily readable (average of 1.89±0.79, 1 being the most favourable score and 5 the worst), sufficiently abstract (1.99±0.88) and formal (2.13±0.77) for its purpose, with sufficient coverage of the domain (2.26±0.95), useful for describing the domain (2.02±0.7) and for generating more specific systems (2±0.75), and they reported a partial interest in using the model in their job (2.48±0.91). Thanks to their comments, the model was improved and enriched with concepts that were missing at the beginning; nonetheless, it was not possible to prove an improvement across the iterations, due to the diversity of the participants in the three rounds.
From the model, a development platform for the pHealth domain was generated, called the pHealth Patient Platform (pHPP). The platform includes a set of libraries, programming and deployment tools, a tutorial and a sample application. The four main modules of the architecture are: the Data Collection Engine, which allows abstracting sources of information like sensors or external services, mapping data to databases and ontologies, and allowing event-based interaction and filtering; the GUI Engine, which abstracts the user interface in a message-like interaction model; the Workflow Engine, which allows programming the application's user interaction flows with graphical workflows; and the Rule Engine, which gives developers a simple means of programming the application's logic in the form of "if-then" rules. After the five-year experience of HeartCycle, partially programmed with pHPP, 5 developers took part in a focus group to discuss the advantages and drawbacks of the platform. The view that emerged from the training course and the focus group was that the platform is well suited to the needs of the engineers working in the field; it allowed the separation of concerns among the different specialities and it simplified some common development tasks like data management and asynchronous interaction. Nevertheless, some deficiencies were pointed out in terms of a lack of maturity of some technological choices, and the absence of some domain-specific tools, e.g. for data processing or for health-related communication protocols. Within HeartCycle, the platform was used to develop part of the Guided Exercise system, a composition of ICT tools for the physical rehabilitation of patients who had suffered a myocardial infarction. The system developed using the platform was tested in a randomized controlled clinical trial, in which 55 patients used the system for 21 weeks. The technical results of this trial showed that the system was stable and reliable. Some minor bugs were detected, but these were promptly corrected using the platform. This shows that the platform, as well as facilitating the development task, can be successfully used to produce reliable software. Conclusions: The research work carried out in developing this thesis provides responses to the three research questions that motivated the work. RQ1 A model was developed representing the domain of personalised health systems, and the assessment of experts in the field was that it represents the domain accurately, with an appropriate balance between abstraction and detail. RQ2 A development platform based on the model was successfully developed. RQ3 The platform has been shown to assist developers in creating complex pHealth software. This was demonstrated within the scope of one large-scale project, but the generic approach adopted provides indications that it would offer benefits more widely. The results of these evaluations provide indications that both the model and the platform are good candidates for becoming a reference for future pHealth developments.
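The pHPP code itself is not quoted in this abstract, so the snippet below is only a generic sketch, with assumed names and thresholds, of the kind of "if-then" rule programming that a rule engine such as the one described might expose to application developers.

```python
# Generic, hypothetical sketch of "if-then" application rules of the kind a
# pHealth rule engine could expose; names and thresholds are illustrative only.
from typing import Any, Callable, Dict, List, Tuple

Fact = Dict[str, Any]
Rule = Tuple[Callable[[Fact], bool], Callable[[Fact], None]]

class RuleEngine:
    def __init__(self) -> None:
        self._rules: List[Rule] = []

    def add_rule(self, condition: Callable[[Fact], bool],
                 action: Callable[[Fact], None]) -> None:
        """Register an if-then rule: when `condition` holds for a fact, run `action`."""
        self._rules.append((condition, action))

    def on_event(self, fact: Fact) -> None:
        """Evaluate every rule against an incoming fact (e.g. a sensor reading)."""
        for condition, action in self._rules:
            if condition(fact):
                action(fact)

engine = RuleEngine()
# If the measured heart rate exceeds a threshold, notify the caregiver.
engine.add_rule(lambda f: f.get("heart_rate", 0) > 120,
                lambda f: print("Alert caregiver: heart rate", f["heart_rate"]))
engine.on_event({"heart_rate": 135})   # triggers the rule
engine.on_event({"heart_rate": 80})    # no rule fires
```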
Abstract:
An accepted fact in software engineering is that software must undergo a verification and validation process during development to ascertain and improve its quality level. But there are more techniques than a single developer could master, and yet it is impossible to be certain that software is free of defects. So it is crucial for developers to be able to choose, from the available evaluation techniques, the one most suitable and likely to yield optimum quality results for different products. Some knowledge is available on the strengths and weaknesses of the available software quality assurance techniques, but not much is known yet about the relationships between different techniques and the contextual behavior of the techniques. Objective: This research investigates the effectiveness of two testing techniques (equivalence class partitioning and decision coverage) and one review technique (code review by abstraction) in terms of their fault detection capability. This will be used to strengthen the practical knowledge available on these techniques.
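As a purely illustrative aside (not drawn from the study's materials), the sketch below shows what the two testing techniques named above mean in practice for a trivial function; the function and the chosen partitions are assumptions made for the example.

```python
# Illustrative example (not from the study) of the two testing techniques named above.
def classify_age(age: int) -> str:
    # Two decisions; decision coverage requires tests where each decision
    # evaluates to both True and False at least once.
    if age < 0:
        return "invalid"
    if age < 18:
        return "minor"
    return "adult"

# Equivalence class partitioning: one representative input per class of inputs
# expected to be handled the same way, paired with the expected output.
test_cases = [
    (-5, "invalid"),   # class: negative ages
    (10, "minor"),     # class: 0 <= age < 18
    (40, "adult"),     # class: age >= 18
]
for value, expected in test_cases:
    assert classify_age(value) == expected

# These three cases also achieve decision coverage: `age < 0` is True for -5
# and False otherwise; `age < 18` is True for 10 and False for 40.
print("all example tests pass")
```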
Abstract:
Empirical Software Engineering (ESE) uses empirical studies as a means to generate evidence that helps determine under what circumstances it is convenient to use a given software technology. This Master's Thesis is part of a research effort that explores whether the intuitions and/or preferences of testers can be used to predict the effectiveness of three code evaluation techniques: reading by stepwise abstractions, decision coverage and equivalence partitioning. To achieve this goal, this Master's Thesis analyzes the data collected in an empirical study run by the tutors. In the empirical study, different subjects apply the three code evaluation techniques to three different programs, into which a series of faults had been artificially introduced. Subjects are required to report the defects found in the programs, as well as to answer a series of questions about their intuitions and preferences.
The data analyses test: 1) what the intuitions and preferences of the subjects are (using the Pearson chi-square test); 2) whether subjects change their minds after applying the techniques (using the Kappa coefficient, the McNemar-Bowker test, and the Stuart-Maxwell test); 3) the consistency of the different questions, comparing intuitions versus intuitions, preferences versus preferences and preferences versus intuitions (using the Kappa coefficient); and 4) finally, whether intuitions and/or preferences predict the actual effectiveness obtained (using the General Linear Model with repeated measures). The results show that there is no clear intuition or particular preference with respect to the programs. Moreover, although there are changes of mind after applying the techniques, there is no clear evidence to claim that intuition and preferences influence their effectiveness. Finally, there are relationships between intuitions and intuitions, preferences and preferences, and intuitions and preferences; these relationships are more noticeable after applying the techniques.
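Purely as an illustration of the kinds of analyses listed above (not the study's actual code or data), the sketch below runs a Pearson chi-square test of association and computes a Kappa agreement coefficient in Python, assuming scipy and scikit-learn are available; the contingency table and ratings are made up.

```python
# Hypothetical illustration of two of the tests mentioned above:
# a Pearson chi-square test of association and Cohen's kappa for agreement.
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.metrics import cohen_kappa_score

# Made-up contingency table: subjects' stated preference (rows)
# versus the technique that was most effective for them (columns).
preference_vs_effectiveness = np.array([
    [12, 5, 3],
    [4, 10, 6],
    [2, 7, 11],
])
chi2, p_value, dof, expected = chi2_contingency(preference_vs_effectiveness)
print(f"chi2={chi2:.2f}, p={p_value:.3f}, dof={dof}")

# Made-up paired ratings: each subject's stated intuition before and after
# applying the techniques; kappa measures agreement beyond chance.
before = ["reading", "coverage", "partitioning", "reading", "coverage"]
after = ["reading", "partitioning", "partitioning", "reading", "coverage"]
print("kappa =", cohen_kappa_score(before, after))
```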
Abstract:
Empirical software engineering adapts the scientific method to software engineering (SE) in order to facilitate knowledge generation. Experimentation is one of the techniques used. For the knowledge generated experimentally to acquire the level of maturity necessary for later use, the experiments have to be replicated. As the same experiment is replicated more than once, there are numerous versions of all the products generated during a replication. These products are generally administered informally, without control.
This is troublesome when it comes to planning new replications or trying to gather information on replications conducted in the past. In order to grasp the size of the problem to be solved, this research examines the current state of the art of the management and use of experimental materials in replications, as well as of the tools for managing experimental materials. The study concludes that none of the analysed approaches provides a solution to the stated problem. The aim of this research is to improve the administration of SE experimental materials and experimental replications in support of experiment replication. To do this, we propose the adaptation of the software configuration management (SCM) and software product line (SPL) paradigms to experimentation. The action research method was selected in order to develop this proposal. The first step in the adaptation of SCM to experimentation was to analyse the experimental process from the viewpoint of the transformation of products. The concepts were then adapted based on the software development and experimentation processes. Finally, a set of instruments was developed and added to an experiment configuration management plan (ECMP). The first step in the adaptation of the SPL to experimentation was to analyse the concepts, activities and phases underlying the SPL. The concepts were then adapted. Finally, techniques, symbols and models were developed or adapted in support of the experimentation product line (EPL) phases. The proposal is validated by evaluating its feasibility, flexibility, usability and satisfaction. Feasibility and flexibility are evaluated by instantiating the ECMP and the EPL in specific SE experiments. Usability is evaluated by using the proposal to generate the instances of the ECMP and EPL. Satisfaction is evaluated with respect to the information about the experiment contained in the ECMP and the EPL. The results of the validation show that the proposal performs best with respect to usability and experimenter satisfaction.
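The ECMP instruments are not detailed in this abstract, so the following is only a hypothetical illustration of the underlying SCM idea applied to experimental materials: each material becomes a configuration item with numbered versions, and a baseline freezes the exact versions a replication used. All names are assumptions made for the example.

```python
# Hypothetical illustration of SCM ideas applied to experimental materials:
# each material is a configuration item with numbered versions, and a
# baseline records the exact versions used by one replication.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ConfigurationItem:
    name: str                       # e.g. "training slides", "defect report form"
    versions: List[str] = field(default_factory=list)

    def check_in(self, content: str) -> int:
        """Store a new version and return its version number (1-based)."""
        self.versions.append(content)
        return len(self.versions)

@dataclass
class Baseline:
    replication_id: str
    items: Dict[str, int] = field(default_factory=dict)   # item name -> version used

materials = {"instrument": ConfigurationItem("instrument")}
materials["instrument"].check_in("defect report form, v1")
v2 = materials["instrument"].check_in("defect report form, v2 (adds severity field)")

# A replication records exactly which versions it used, so it can be reproduced.
baseline = Baseline(replication_id="REPL-2024-01", items={"instrument": v2})
print(baseline)
```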
Abstract:
Background: This research addresses first and foremost the replication and also the synthesis of software engineering (SE) experiments. Replication is impossible without access to all the details of the original experiment. But the description of experiments is usually incomplete because knowledge is tacit, there is no standard reporting format or there are hardly any tools to support the generation of experimental reports, etc. This means that the original experiment cannot be reproduced exactly. These issues place considerable constraints on experimenters' options for carrying out replications and ultimately synthesizing experiments. Aim: The aim of the research is to formalize the SE experimental process in order to facilitate information communication among experimenters. Context: This PhD research was developed within the empirical software engineering research group (GrISE) at the Universidad Politécnica de Madrid (UPM)'s School of Computer Engineering (ETSIINF) as part of project TIN2011-23216, entitled "Technologies for Software Engineering Experiment Replication and Synthesis", which was funded by the Spanish Government. The GrISE research group fulfils all the requirements (an established family of experiments with at least three experimental lines and lengthy replication experience (16 replications prior to 2011 in the software testing techniques line)) and provides favourable conditions for the research to be conducted in the best possible way, such as full access to information. Research Method: We opted for action research (AR) as the research method best suited to the characteristics of the investigation. Results were generated through successive rounds of AR addressing specific communication problems among experimenters. Results: The conceptual model of the experimental cycle was formalized from the viewpoint of the three key roles representing experimenters in the experimental process: research manager, experiment manager and senior experimenter. The model of the experimental cycle was formalized by means of a workflow and a process diagram. In tandem with the formalization of the SE experimental process, an infrastructure for sharing and replicating experiments (ISRE) was developed. ISRE is a proof of concept of an SE experimentation support environment.
Finally, guidelines for developing SE experimentation support environments were designed based on the study of the key features that the models of experimentation support tools for different experimental disciplines had in common. Conclusions: The key contribution of this research is the formalization of the SE experimental process. GrISE experimenters were satisfied with both the models representing the formalization of the experimental cycle and the ISRE tool built in order to evaluate the models. In order to further validate the formalization, this study should be replicated at other research groups representative of the experimental SE community. Future Research Lines: The achievement of the aims and the resulting findings have led to new research lines, which are as follows: (1) assess the feasibility of building a mechanism to help experimenters collaboratively specify tacit knowledge based on debate and consensus, (2) continue empirical research at the same research group in order to cover the remainder of the experimental cycle (for example, new experiments, results synthesis, etc.), (3) replicate the research process at other ESE research groups, and (4) update the tools of the proof of concept in order to meet the constraints and needs of a real research environment.
Abstract:
The new degrees in Spanish universities generated as a result of the Bologna process stress a new dimension: the generic competencies to be acquired by university students (leadership, problem solving, respect for the environment, etc.). At Universidad Politécnica de Madrid a teaching model was defined for two degrees: Graduate in Computer Engineering and Graduate in Software Engineering. This model incorporates the training, development and assessment of the generic competencies planned in these curricula. The aim of this paper is to describe how the model was implemented in both degrees. The model has three components. The first refers to a set of seven activities for introducing mechanisms for training, development and assessment of generic competencies. The second component aims to coordinate the actions that implement the competencies across courses (in space and time). The third component consists of a series of activities to perform quality control. The implementation of generic competencies was carried out in first-year courses (first and second semesters), together with the planning for second-year courses (third and fourth semesters). We managed to involve a high percentage of first-year courses (80%), and the contacts that have been initiated suggest a high percentage in the second year as well.
Abstract:
In this paper we want to point out, by means of a case study, the importance of incorporating some knowledge engineering techniques into the processes of software engineering. Specifically, we are referring to knowledge eduction techniques. We know the difficulty of requirements acquisition and its importance in minimising the risks of a software project, both in the development phase and in the maintenance phase. To capture the functional requirements, use cases are generally used. However, as we will show in this paper, this technique is insufficient when the problem domain knowledge exists only in the experts' minds. In this situation, the combination of use cases with eduction techniques, in every development phase, will let us discover the correct requirements.
Abstract:
This document contains a detailed description of the design and implementation of a multi-agent application controlling traffic lights in a city, together with a system for simulating traffic and testing. The goal of this thesis is to design and build a simplified, intelligent and distributed solution to the traffic problem of big cities, following good practices so as to allow future refinement of the model of the real world. The problem of traffic in big cities remains unsolved. The increasing number of cars is not the only reason for traffic jams; the way the traffic is organized matters as well. Usually, intersections with traffic lights are replaced by roundabouts or interchanges to increase the number of cars that can cross the intersection in a given time. But there are still places where the infrastructure cannot be changed and traffic lights are the only way to control the car flows. In real life, traffic lights either follow a predefined change plan or receive information from a centralized system about when and how they have to change. But what if the traffic lights could cooperate and decide on their own when and how to change? Using this problem, the purpose of the thesis is to explore different agent-based software engineering approaches to design and build a non-conventional distributed system. From the software engineering point of view, the goal of the thesis is to apply the knowledge and use the skills acquired during the various courses of the Master's programme in Software Engineering while solving a practical and complex problem such as traffic in cities.
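The thesis design is not reproduced in this abstract; the snippet below is only a hypothetical, heavily simplified sketch of how a traffic-light agent could decide when to switch by combining its own queue lengths with information received from neighbouring agents.

```python
# Hypothetical, heavily simplified sketch of a cooperating traffic-light agent:
# each agent gives green to the direction with the most pressure, combining its
# own queue lengths with queue lengths reported by neighbouring intersections.
from typing import Dict

class TrafficLightAgent:
    def __init__(self, name: str):
        self.name = name
        self.queues: Dict[str, int] = {"north_south": 0, "east_west": 0}
        self.neighbour_reports: Dict[str, Dict[str, int]] = {}

    def receive_report(self, neighbour: str, queues: Dict[str, int]) -> None:
        """Store the queue lengths a neighbouring agent has sent us."""
        self.neighbour_reports[neighbour] = queues

    def decide_phase(self) -> str:
        """Pick the green direction from local queues plus neighbours' reports."""
        pressure: Dict[str, float] = {d: float(n) for d, n in self.queues.items()}
        for queues in self.neighbour_reports.values():
            for direction, length in queues.items():
                # Cars queued at a neighbour will arrive here later, so weight them lower.
                pressure[direction] = pressure.get(direction, 0.0) + 0.5 * length
        return max(pressure, key=pressure.get)

agent = TrafficLightAgent("crossing_A")
agent.queues = {"north_south": 4, "east_west": 9}
agent.receive_report("crossing_B", {"north_south": 12, "east_west": 1})
print(agent.decide_phase())  # north_south: 4 + 0.5*12 = 10 beats 9 + 0.5*1 = 9.5
```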