93 results for 280302 Software Engineering
Abstract:
The concept of service-oriented architecture has been extensively explored in software engineering because it produces architectures made up of several interconnected modules that are easy to reuse when building new systems. This approach to design would be impossible without interconnection mechanisms such as REST (Representational State Transfer) services, which allow modules to communicate while minimizing coupling. However, this low coupling brings disadvantages, such as a lack of transparency, which makes it difficult to create tests systematically without knowledge of the inner workings of a system. In this article, we present an automatic error detection system for REST services based on a statistical analysis of the responses produced over multiple service invocations. A service can thus be tested systematically without knowing its full specification. The method can find errors in REST services that could not be identified by traditional testing methods, and it provides limited testing coverage for services whose response format is unknown. It can also be useful as a complement to other testing mechanisms.
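The general idea can be illustrated with a small sketch. The code below is not the authors' tool; it only shows what a statistical analysis over repeated invocations might look like: call an endpoint many times, summarize each response by a structural signature, and flag invocations whose signature deviates from the dominant one. The URL, the signature function and the 90% support threshold are illustrative assumptions.

```python
"""Minimal sketch of statistical anomaly detection over repeated REST calls."""
import json
from collections import Counter

import requests  # third-party: pip install requests

URL = "https://api.example.com/items/42"  # hypothetical endpoint

def response_signature(resp):
    """Summarize a response by status code and the sorted set of JSON keys."""
    try:
        keys = tuple(sorted(json.loads(resp.text).keys()))
    except (ValueError, AttributeError):
        keys = ("<non-json>",)
    return (resp.status_code, keys)

def detect_outliers(url, n=50, min_support=0.9):
    """Flag invocations whose signature differs from the most common one."""
    signatures = [response_signature(requests.get(url, timeout=5)) for _ in range(n)]
    counts = Counter(signatures)
    common, freq = counts.most_common(1)[0]
    suspects = [i for i, s in enumerate(signatures) if s != common]
    return common, freq / n, suspects

if __name__ == "__main__":
    expected, support, suspects = detect_outliers(URL)
    print(f"dominant signature {expected} (support {support:.0%}); "
          f"suspect invocations: {suspects}")
```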
Abstract:
The verification and validation activity plays a fundamental role in improving software quality. Determining which techniques are most effective for carrying out this activity has been an aspiration of experimental software engineering researchers for years. This paper reports a controlled experiment evaluating the effectiveness of two unit testing techniques: the functional testing technique known as equivalence partitioning (EP) and the control-flow structural testing technique known as branch testing (BT). This experiment is a literal replication of Juristo et al. (2013). Both experiments serve the purpose of determining whether the effectiveness of BT and EP varies depending on whether or not the faults are visible for the technique (InScope or OutScope, respectively). We have used the materials, design and procedures of the original experiment, but in order to adapt the experiment to the context we have: (1) reduced the number of studied techniques from 3 to 2; (2) assigned subjects to experimental groups by means of stratified randomization to balance the influence of programming experience; (3) localized the experimental materials; and (4) adapted the training duration. We ran the replication at the Escuela Politécnica del Ejército Sede Latacunga (ESPEL) as part of a software verification & validation course. The experimental subjects were 23 master's degree students. EP is more effective than BT at detecting InScope faults. The session/program and group variables are found to have significant effects. BT is more effective than EP at detecting OutScope faults. The session/program and group variables have no effect in this case. The results of the replication and the original experiment are similar with respect to testing techniques. There are some inconsistencies with respect to the group factor. They can be explained by small sample effects. The results for the session/program factor are inconsistent for InScope faults. We believe that these differences are due to a combination of the fatigue effect and a technique x program interaction. Although we were able to reproduce the main effects, the changes to the design of the original experiment make it impossible to identify the causes of the discrepancies for sure. We believe that further replications closely resembling the original experiment should be conducted to improve our understanding of the phenomena under study.
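For readers unfamiliar with the two techniques being compared, the toy example below (not part of the experiment's materials) shows how EP and BT derive test cases from the same small function: EP picks one representative input per partition of the input domain, while BT picks inputs so that every branch outcome is exercised.

```python
# Illustrative comparison of equivalence partitioning (EP) and branch testing (BT)
# on a small, made-up function.

def classify_grade(score):
    """Return a label for an exam score in [0, 100]."""
    if score < 0 or score > 100:
        raise ValueError("score out of range")
    if score >= 50:
        return "pass"
    return "fail"

# EP: one representative per input partition (invalid-low, fail, pass, invalid-high).
EP_CASES = [(-5, ValueError), (30, "fail"), (75, "pass"), (120, ValueError)]

# BT: inputs chosen so that every branch outcome in the code is exercised.
BT_CASES = [(-1, ValueError), (101, ValueError), (50, "pass"), (49, "fail")]

def run(cases):
    for value, expected in cases:
        try:
            result = classify_grade(value)
        except ValueError:
            result = ValueError
        assert result == expected, (value, result, expected)

if __name__ == "__main__":
    run(EP_CASES)
    run(BT_CASES)
    print("all illustrative test cases pass")
```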
Abstract:
Although the aim of empirical software engineering is to provide evidence for selecting the appropriate technology, it appears that there is a lack of recognition of this work in industry. Results from empirical research only rarely seem to find their way to company decision makers. If information relevant for software managers is provided in reports on experiments, such reports can be considered a source of information for them when they are faced with making decisions about the selection of software engineering technologies. To bridge this communication gap between researchers and professionals, we propose characterizing the information needs of software managers in order to show empirical software engineering researchers which information is relevant for decision-making and thus enable them to make this information available. We empirically investigated decision makers' information needs to identify which information they need to judge the appropriateness and impact of a software technology. We empirically developed a model that characterizes these needs. To ensure that researchers provide relevant information when reporting results from experiments, we extended existing reporting guidelines accordingly. We performed an experiment to evaluate our model with regard to its effectiveness. Software managers who read an experiment report structured according to the proposed model judged the technology's appropriateness significantly better than those reading a report about the same experiment that did not explicitly address their information needs. Our research shows that information regarding a technology, the context in which it is supposed to work and, most importantly, the impact of this technology on development costs and schedule as well as on product quality is crucial for decision makers.
Abstract:
The backdrop of the current problems surrounding the implementation of Information Technology (IT) service management in Small and Medium Enterprises (SMEs) is described. The reasons why reaching a maturity/capability level through well-known standards, or implementing good software engineering practices by means of the IT Infrastructure Library, is so difficult for SMEs to achieve are set out, and solutions to the exposed problems are explained. The master's thesis goals are also presented in terms of purpose, research questions, research goals, objectives and scope. Finally, the thesis structure is described.
Abstract:
This document contains a detailed description of the design and implementation of a multi-agent application that controls traffic lights in a city, together with a system for simulating traffic and testing. The goal of this thesis is to design and build a simplified, intelligent and distributed solution to the traffic problem in big cities, following good practices so that the model of the real world can be refined in the future. Traffic in big cities remains an unsolved problem. The increasing number of cars is not the only reason for traffic jams; the way the traffic is organized also contributes. Intersections with traffic lights are usually replaced by roundabouts or interchanges to increase the number of cars that can cross the intersection in a given time, but there are still places where the infrastructure cannot be changed and traffic-light semaphores are the only way to control car flows. In real life, traffic lights follow a predefined change plan or receive information from a centralized system about when and how to change. But what if the traffic lights could cooperate and decide on their own when and how to change? Using this problem, the purpose of the thesis is to explore different agent-based software engineering approaches to design and build a non-conventional distributed system. From the software engineering point of view, the goal of the thesis is to apply the knowledge and skills acquired during the various courses of the master's program in Software Engineering while solving a practical and complex problem such as city traffic.
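A minimal sketch, under assumptions of our own (agent and method names, a crude "pressure" rule, random arrivals), of what cooperating traffic-light agents could look like; it is not the thesis design, only an illustration of lights deciding locally from their own and their neighbours' queues.

```python
"""Toy model of cooperating traffic-light agents."""
import random

class TrafficLightAgent:
    def __init__(self, name):
        self.name = name
        self.neighbours = []
        self.queues = {"NS": 0, "EW": 0}   # waiting cars per approach

    def sense(self):
        """Stand-in for sensors: random arrivals on each approach."""
        for approach in self.queues:
            self.queues[approach] += random.randint(0, 3)

    def pressure(self, approach):
        """Local queue plus a share of what neighbours report on that approach."""
        incoming = sum(n.queues[approach] for n in self.neighbours)
        return self.queues[approach] + 0.5 * incoming

    def decide(self):
        """Give green to the approach with the highest pressure, then drain it."""
        green = max(self.queues, key=self.pressure)
        served = min(self.queues[green], 5)       # capacity per cycle
        self.queues[green] -= served
        return green, served

if __name__ == "__main__":
    a, b = TrafficLightAgent("A"), TrafficLightAgent("B")
    a.neighbours, b.neighbours = [b], [a]
    for step in range(3):
        for agent in (a, b):
            agent.sense()
        for agent in (a, b):
            print(step, agent.name, *agent.decide())
```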
Abstract:
This paper presents the first stage of the implementation of digital content, delivered through a website, covering operating system (OS) topics. Students at the Technological University of the Valle of Toluca (UTVT) currently find this area very difficult to learn. The academic staff therefore developed this project in collaboration with the Technological University of the Suroeste of Guanajuato (UTSOE), with the aim of improving teaching and learning, incorporating information and communication technologies (ICT) into teaching practice, and providing students with a tool that stimulates their interest. Through the interactivity offered by the web, students can further strengthen their knowledge and participate actively; they have easy access to information, with neither distance nor time as a constraint, and a clear view of the concepts they need in order to acquire the relevant skills and the ability to self-regulate their learning. Software engineering methodologies and web technologies were used for the design and development of the project.
Abstract:
Empirical Software Engineering (ESE) replication researchers need to store and manipulate experimental data for several purposes, in particular analysis and reporting. Current research needs call for sharing and preservation of experimental data as well. In a previous work, we analyzed Replication Data Management (RDM) needs. A novel concept, called Experimental Ecosystem, was proposed to solve current deficiencies in RDM approaches. The empirical ecosystem provides replication researchers with a common framework that transparently integrates local heterogeneous data sources. A typical situation where the Empirical Ecosystem is applicable is when several members of a research group, or several research groups collaborating together, need to share and access each other's experimental results. However, to be able to apply the Empirical Ecosystem concept and deliver all promised benefits, it is necessary to analyze the software architectures and tools that can properly support it.
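The abstract stays at the conceptual level, so the sketch below is only one illustrative way a framework that "transparently integrates local heterogeneous data sources" could be shaped: a thin adapter interface per source plus a merged view. Every class and method name here is an assumption, not the authors' design.

```python
"""Illustrative adapter-style integration of heterogeneous replication data."""
from abc import ABC, abstractmethod
import csv, json, io

class ReplicationDataSource(ABC):
    """Uniform view of one group's locally stored experimental data."""
    @abstractmethod
    def records(self):
        """Yield experiment records as plain dictionaries."""

class CsvSource(ReplicationDataSource):
    def __init__(self, text):
        self._text = text
    def records(self):
        yield from csv.DictReader(io.StringIO(self._text))

class JsonSource(ReplicationDataSource):
    def __init__(self, text):
        self._text = text
    def records(self):
        yield from json.loads(self._text)

def merged_view(sources):
    """The 'ecosystem' side: iterate all sources through the same interface."""
    for source in sources:
        yield from source.records()

if __name__ == "__main__":
    csv_data = "subject,technique,defects\nS1,EP,4\nS2,BT,3\n"
    json_data = '[{"subject": "S3", "technique": "EP", "defects": 5}]'
    for rec in merged_view([CsvSource(csv_data), JsonSource(json_data)]):
        print(rec)
```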
Abstract:
Context: The software engineering community is becoming more aware of the need for experimental replications. In spite of the importance of this topic, there is still much inconsistency in the terminology used to describe replications. Objective: Understand the perspectives of empirical researchers about the various terms used to characterize replications and propose a consistent taxonomy of terms. Method: A survey followed by a plenary discussion during the 2013 International Software Engineering Research Network meeting. Results: We propose a taxonomy which consolidates the disparate terminology. This taxonomy had a high level of agreement among workshop attendees. Conclusion: Consistent terminology is important for any field to progress. This work is the first step in that direction. Additional study and discussion are still necessary.
Abstract:
Keynote speech about experimental design
Abstract:
Context. This thesis is framed in experimental software engineering. More concretely, it addresses the problems that arise when assessing process conformance in test-driven development experiments conducted by UPM's Experimental Software Engineering group. Process conformance was studied using the Eclipse plug-in Besouro. It has been observed that Besouro does not work correctly in some circumstances. This casts doubt on the correctness of the existing experimental data, which would render it useless. Aim. The main objective of this work is the identification and correction of Besouro's faults. A secondary goal is fixing the datasets already obtained in past experiments to the maximum possible extent, so that existing experimental results can be used with confidence. Method. (1) Test Besouro using different sequences of events (method creation, assertions, etc.) to identify the underlying faults. (2) Fix the code and (3) fix the datasets using code specially created for this purpose. Results. (1) We confirmed the existence of several faults in Besouro's code that affected Test-First and Test-Last episode identification. These faults caused the incorrect identification of 20% of the episodes. (2) We were able to fix Besouro's code. (3) The correction of the existing datasets was possible, subject to some restrictions (such as the impossibility of tracing code size increases back to programming time). Conclusion. The results of past experiments that depend on Besouro's data may not be trustworthy. We suspect that more faults remain in Besouro's code, whose identification requires further analysis.
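Besouro's actual classification rules are not described here, so the sketch below is only a hedged illustration of the kind of check involved: deciding from an ordered sequence of IDE events whether an episode is Test-First or Test-Last, plus a couple of known event sequences used as assertions, in the spirit of step (1) of the method. The event format and the rule are assumptions, not Besouro's code.

```python
"""Illustrative episode classification from an ordered event sequence."""

def classify_episode(events):
    """events: list of (kind, artifact) pairs, e.g. ('edit', 'test'),
    ('edit', 'production') or ('run', 'tests')."""
    first_test_edit = next((i for i, (k, a) in enumerate(events)
                            if k == "edit" and a == "test"), None)
    first_prod_edit = next((i for i, (k, a) in enumerate(events)
                            if k == "edit" and a == "production"), None)
    if first_test_edit is None:
        return "production-only"
    if first_prod_edit is None or first_test_edit < first_prod_edit:
        return "test-first"
    return "test-last"

# Tiny regression suite: feed known event sequences and check the outcome.
assert classify_episode([("edit", "test"), ("run", "tests"),
                         ("edit", "production"), ("run", "tests")]) == "test-first"
assert classify_episode([("edit", "production"), ("edit", "test"),
                         ("run", "tests")]) == "test-last"
print("episode classification checks pass")
```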
Abstract:
The rise of the Internet of Things (IoT) and its associated technologies has enabled their application in many domains, among them the monitoring of forest ecosystems, disaster and emergency management, home automation, industrial automation, smart-city services, building energy efficiency, intrusion detection and the monitoring of body signals, among many others. The drawback of an IoT network is that, once deployed, it is left unattended; it is exposed, among other things, to changing weather conditions, natural disasters, software or hardware failures, and malicious third-party attacks, so such networks can be considered failure-prone. The main requirement on the nodes that make up an IoT network is that they must be able to keep working despite errors in the system itself. The ability of the network to recover from unexpected internal and external failures is what is currently known as the "resilience" of the network. Therefore, when designing and deploying IoT applications or services, the network is expected to be fault-tolerant, self-configuring, self-adaptive and self-optimizing with respect to new conditions that may arise during operation. This leads to the analysis of a fundamental problem in the study of IoT networks: the connectivity problem. A network is said to be connected if every pair of nodes in the network can find at least one communication path between them. However, the network can become disconnected for several reasons, such as battery depletion or the destruction of a node. It therefore becomes necessary to manage the resilience of the network in order to maintain connectivity between its nodes, so that each IoT node is able to provide continuous services to other nodes, other networks, or other services and applications. In this context, the main objective of this doctoral thesis is the study of the IoT connectivity problem and, more specifically, the development of models for the analysis and management of resilience, put into practice through wireless sensor networks (WSNs), in order to improve the fault tolerance of the nodes that make up the network. This challenge is addressed from two different angles: on the one hand, unlike other kinds of conventional device networks, the nodes of an IoT network are prone to losing their connection, because they are deployed in isolated environments or in environments with extreme conditions; on the other hand, the nodes are usually resources with low capabilities in terms of processing, storage and battery, among others, which requires the design of their resilience management to be lightweight, distributed and energy-efficient. Along these lines, this thesis develops self-adaptive techniques that allow an IoT network, from the perspective of topology control, to be resilient to node failures. To this end, techniques based on fuzzy logic and on proportional-integral-derivative (PID) control are used, with the aim of improving network connectivity while preserving energy consumption as much as possible.
Likewise, the control algorithm must be distributed because, in general, centralized approaches are not feasible for large-scale deployments. This thesis work involves several challenges concerning network connectivity, including: the creation and analysis of mathematical models that describe the network, a proposal for a self-adaptive control system that responds to node failures, the optimization of the control system parameters, validation through an implementation following a software engineering approach and, finally, evaluation in a real application. In view of these challenges, this work justifies, through mathematical analysis, the relation between the "node degree" (defined as the number of nodes in the neighborhood of the node in question) and network connectivity, and proves the effectiveness of several types of controllers that adjust the transmission power of the network nodes in response to failures, taking energy consumption into account as part of the control objectives. The work also carries out an evaluation and a comparison against other representative algorithms, showing that the approach developed here tolerates more random node failures and is more energy-efficient. Additionally, the use of bio-inspired algorithms has made it possible to optimize the control parameters of large dynamic networks. Regarding the implementation in a real system, the proposals of this thesis have been integrated into an OSGi ("Open Services Gateway Initiative") programming model in order to create a self-adaptive middleware that improves resilience management, especially the runtime reconfiguration of software components when a failure has occurred. In conclusion, the results of this doctoral thesis contribute to theoretical research on, and the practical application of, resilient topology control in large distributed networks. The designs and algorithms presented can be seen as a novel trial of some of these techniques for the coming IoT era. The main contributions of this thesis are summarized below: (1) Properties related to network connectivity have been analyzed mathematically. It is studied, for example, how the probability of network connection varies when the communication range of the nodes is modified, and what the minimum number of nodes is that must be added to a disconnected system to reconnect it. (2) Control systems based on fuzzy logic have been proposed to reach the desired node degree while maintaining full network connectivity. Different types of fuzzy-logic controllers have been evaluated by simulation, and the results have been compared with other representative algorithms. (3) The two-loop control system, a simpler and more applicable approach, has been investigated further, and its control parameters have been optimized using heuristic algorithms such as the Cross Entropy (CE) method, Particle Swarm Optimization (PSO) and Differential Evolution (DE).
(4) Most of the designs presented here have been evaluated by simulation; in addition, part of the work has been implemented and validated in a real application combining self-adaptive software techniques, such as those of a service-oriented architecture (SOA). ABSTRACT The advent of the Internet of Things (IoT) enables a tremendous number of applications, such as forest monitoring, disaster management, home automation, factory automation, smart city, etc. However, various kinds of unexpected disturbances may cause node failure in the IoT, for example battery depletion, software/hardware malfunction issues and malicious attacks. So, it can be considered that the IoT is prone to failure. The ability of the network to recover from unexpected internal and external failures is known as the "resilience" of the network. Resilience usually serves as an important non-functional requirement when designing the IoT, which can further be broken down into "self-*" properties, such as self-adaptive, self-healing, self-configuring, self-optimizing, etc. One of the consequences that node failure brings to the IoT is that some nodes may be disconnected from others, such that they are not capable of providing continuous services for other nodes, networks, and applications. In this sense, the main objective of this dissertation focuses on the IoT connectivity problem. A network is regarded as connected if any pair of different nodes can communicate with each other either directly or via a limited number of intermediate nodes. More specifically, this thesis focuses on the development of models for the analysis and management of resilience, implemented through Wireless Sensor Networks (WSNs), which is a challenging task. On the one hand, unlike other conventional network devices, nodes in the IoT are more likely to be disconnected from each other due to their deployment in a hostile or isolated environment. On the other hand, nodes are resource-constrained in terms of limited processing capability, storage and battery capacity, which requires that the design of the resilience management for the IoT be lightweight, distributed and energy-efficient. In this context, the thesis presents self-adaptive techniques for the IoT, with the aim of making the IoT resilient against node failures from the network topology control point of view. Fuzzy-logic and proportional-integral-derivative (PID) control techniques are leveraged to improve the network connectivity of the IoT in response to node failures, while taking into consideration that energy consumption must be preserved as much as possible. The control algorithm itself is designed to be distributed, because centralized approaches are usually not feasible in large-scale IoT deployments. The thesis involves various aspects concerning network connectivity, including: creation and analysis of mathematical models describing the network, proposing self-adaptive control systems in response to node failures, control-system parameter optimization, implementation using a software engineering approach, and evaluation in a real application. This thesis also justifies the relation between the "node degree" (the number of neighbors of a node) and network connectivity through mathematical analysis, and proves the effectiveness of various types of controllers that can adjust the transmission power of IoT nodes in response to node failures.
The controllers also take energy consumption into consideration as part of the control goals. The evaluation is performed and a comparison is made with other representative algorithms. The simulation results show that the proposals in this thesis can tolerate more random node failures and save more energy than those representative algorithms. Additionally, the simulations demonstrate that the use of bio-inspired algorithms allows the parameters of the controller to be optimized. With respect to the implementation in a real system, the programming model called OSGi (Open Service Gateway Initiative) is integrated with the proposals in order to create a self-adaptive middleware, especially for reconfiguring the software components at runtime when failures occur. The outcomes of this thesis contribute to theoretical research on, and practical applications of, resilient topology control for large and distributed networks. The presented controller designs and optimization algorithms can be viewed as novel trials of control and optimization techniques for the coming era of the IoT. The contributions of this thesis can be summarized as follows: (1) Mathematically, the fault-tolerant probability of a large-scale stochastic network is analyzed. It is studied how the probability of network connectivity depends on the communication range of the nodes, and what the minimum number of neighbors is that must be added for network re-connection. (2) A fuzzy-logic control system is proposed, which obtains the desired node degree and in turn maintains the network connectivity when it is subject to node failures. Different types of fuzzy-logic controllers are evaluated by simulations, and the results demonstrate the improvement in fault-tolerant capability compared to some other representative algorithms. (3) A simpler but more applicable approach, the two-loop control system, is further investigated, and its control parameters are optimized by using heuristic algorithms such as Cross Entropy (CE), Particle Swarm Optimization (PSO), and Differential Evolution (DE). (4) Most of the designs are evaluated by means of simulations, but part of the proposals are implemented and tested in a real-world application by combining the self-adaptive software techniques and the control algorithms presented in this thesis.
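The central control idea (each node adjusting its transmission power so that its node degree tracks a desired value) can be illustrated with a small simulation. The sketch below is not the thesis implementation: the gains, the random topology and the degree target are assumptions chosen only to make the example run, and the fuzzy-logic variant is left out in favour of a plain PID loop.

```python
"""Illustrative per-node PID loop tracking a target node degree."""
import math, random

class DegreePID:
    def __init__(self, target_degree, kp=0.8, ki=0.05, kd=0.2):
        self.target = target_degree
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured_degree):
        """Return a transmission-range adjustment from the degree error."""
        error = self.target - measured_degree
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def degree(node, nodes):
    """Neighbours are the other nodes within this node's transmission range."""
    return sum(1 for other in nodes
               if other is not node
               and math.dist(node["pos"], other["pos"]) <= node["range"])

if __name__ == "__main__":
    random.seed(1)
    nodes = [{"pos": (random.uniform(0, 100), random.uniform(0, 100)),
              "range": 10.0, "pid": DegreePID(target_degree=4)}
             for _ in range(40)]
    for step in range(20):
        for node in nodes:
            adjustment = node["pid"].update(degree(node, nodes))
            # Clamp the range: more power means more neighbours but more energy.
            node["range"] = min(40.0, max(2.0, node["range"] + adjustment))
    avg = sum(degree(n, nodes) for n in nodes) / len(nodes)
    print(f"average node degree after control: {avg:.1f}")
```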
Abstract:
Context: Measurement is crucial and important to empirical software engineering. Although reliability and validity are two important properties warranting consideration in measurement processes, they may be influenced by random or systematic error (bias) depending on which metric is used. Aim: Check whether the simple subjective metrics used in empirical software engineering studies are prone to bias. Method: Comparison of the reliability of a family of empirical studies on requirements elicitation that explore the same phenomenon using different design types and objective and subjective metrics. Results: The objectively measured variables (experience and knowledge) tend to achieve more reliable results, whereas subjective metrics using Likert scales (expertise and familiarity) tend to be influenced by systematic error or bias. Conclusions: Studies that rely predominantly on subjectively measured variables, such as opinion polls or expert opinion acquisition, are therefore liable to be affected by this kind of bias.
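As a purely illustrative aside (all numbers below are simulated, not data from these studies), the difference in reliability between an objective metric and a coarse Likert-style rating can be made concrete by comparing test-retest correlations: the bounded five-point scale, its coarseness and a drift between measurement occasions pull the subjective correlation down.

```python
"""Toy comparison of test-retest reliability for an objective vs a subjective metric."""
import random

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
true_experience = [random.uniform(0, 10) for _ in range(100)]

# Objective metric (e.g. years of experience): small random error only.
objective_1 = [t + random.gauss(0, 0.3) for t in true_experience]
objective_2 = [t + random.gauss(0, 0.3) for t in true_experience]

# Subjective rating: coarse 5-point scale plus a systematic drift between occasions.
def likert(value, bias):
    return max(1, min(5, round(value / 2 + bias + random.gauss(0, 0.5))))

subjective_1 = [likert(t, bias=0.0) for t in true_experience]
subjective_2 = [likert(t, bias=0.7) for t in true_experience]

print("objective test-retest r :", round(pearson(objective_1, objective_2), 2))
print("subjective test-retest r:", round(pearson(subjective_1, subjective_2), 2))
```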
Abstract:
Context: Empirical Software Engineering (ESE) replication researchers need to store and manipulate experimental data for several purposes, in particular analysis and reporting. Current research needs call for sharing and preservation of experimental data as well. In a previous work, we analyzed Replication Data Management (RDM) needs. A novel concept, called Experimental Ecosystem, was proposed to solve current deficiencies in RDM approaches. The empirical ecosystem provides replication researchers with a common framework that transparently integrates local heterogeneous data sources. A typical situation where the Empirical Ecosystem is applicable is when several members of a research group, or several research groups collaborating together, need to share and access each other's experimental results. However, to be able to apply the Empirical Ecosystem concept and deliver all promised benefits, it is necessary to analyze the software architectures and tools that can properly support it.
Abstract:
This paper shares our experience with the initial negotiation and topic elicitation process for conducting industry experiments in six software development organizations in Finland. The process involved interaction with company representatives in the form of both multiple group discussions and separate face-to-face meetings. Fitness criteria developed by the researchers were applied to the list of generated topics to decide on a common topic. The challenges we faced include the diversity of proposed topics, communication gaps, skepticism about research methods, an initial disconnect between research and industry needs, and the lack of a prior working relationship. Lessons learned include allowing enough time to establish trust with partners, leveraging the benefits of training and skill development that are inherent in the experimental approach, uniquely positioning the experimental approach within the landscape of other validation approaches more familiar to industrial partners, and introducing the fitness criteria early in the process.
Abstract:
Historically, teachers have always searched for a connection with their students to make education interesting and a vital experience. In the 19th century, the pedagogue Johann Heinrich Pestalozzi taught children how to sum using wooden blocks. His successors have followed his legacy and today use a wide variety of media, including board games, in order to reach out to their students. These methods are called educational technologies, defined as the study and ethical practice of facilitating learning and improving performance by creating, using and managing appropriate technological processes and resources. With the advent of information technologies, teachers have at their disposal new media with which they can increase the interest of their students. This technological revolution is changing the present educational model. The objective of this dissertation is to develop an educational videogame that helps students learn mathematics. To reach this goal, the videogame has been developed with the game engine Unity as the main tool. Additionally, agile software development methodologies as well as other software engineering techniques have been used. The result is Riskmatica, an educational videogame based on geographical domination in which knowledge is the best weapon. Players must conquer enemy territories by correctly answering mathematical questions. Moreover, the videogame has the functionality required to configure a new game and input new questions. To conclude, this project has created an educational technology which greatly appeals to students and which educators can use to improve their mathematics lessons.---RESUMEN---Throughout history, educators have always sought to connect with their students in order to capture their interest and turn education into a vital experience. The pedagogue Johann Heinrich Pestalozzi achieved this in the 19th century by teaching children to count with wooden blocks. His successors have followed his legacy and today use a variety of media to motivate their students, in some cases board games. These methods are called educational technologies, defined as the study and ethical practice of facilitating and improving learning through the creation, use and management of technological processes and resources. With the advent of information technologies, educators have at their disposal a new medium with which to reach their students. This technological revolution is changing the current educational model. The objective of this project is to create an educational videogame that helps students study mathematics. To achieve this, the popular game engine Unity has been used as the main tool. Agile development methodologies and other software engineering techniques have also been employed. The result is Riskmática, an educational videogame of geographical domination in which knowledge is the most effective weapon. Players must conquer their opponents' territories by answering mathematical questions. The videogame also has the functionality needed to configure a game and add new questions. In conclusion, this project has created an educational technology that is very appealing to students and that teachers can use to improve the teaching of mathematics.
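The core mechanic (a territory is conquered only if the attacker answers a mathematics question correctly) is simple enough to sketch outside the engine. The real game is built in Unity; the Python fragment below is only an illustration, and the question list, class names and the "perfect player" callback are assumptions.

```python
"""Illustrative sketch of the question-to-conquer mechanic."""
import random

QUESTIONS = [("7 * 8", 56), ("12 + 25", 37), ("81 / 9", 9)]  # (prompt, answer)

class Territory:
    def __init__(self, name, owner=None):
        self.name, self.owner = name, owner

def attack(territory, attacker, answer_fn):
    """The attacker conquers the territory only by answering correctly."""
    prompt, expected = random.choice(QUESTIONS)
    if answer_fn(prompt) == expected:
        territory.owner = attacker
        return True
    return False

if __name__ == "__main__":
    board = [Territory("Iberia", owner="blue"), Territory("Gaul", owner="red")]
    # A perfect 'player' for demonstration: looks up the answer to the prompt.
    perfect_player = lambda prompt: dict(QUESTIONS)[prompt]
    won = attack(board[0], attacker="red", answer_fn=perfect_player)
    print("conquered" if won else "repelled", board[0].name, "owner:", board[0].owner)
```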