960 results for Computer Engineering|Electrical engineering


Relevance: 90.00%

Abstract:

Observability measures how well computer systems support accurately capturing, analyzing, and presenting (collectively, observing) internal information about the systems. Observability frameworks play important roles in program understanding, troubleshooting, performance diagnosis, and optimization. However, traditional solutions are either expensive or coarse-grained, which compromises their utility for today’s increasingly complex software systems. New solutions are emerging for VM-based languages due to the full control language VMs have over program executions. Existing solutions of this kind, nonetheless, still lack flexibility, have high overhead, or provide limited context information for developing powerful dynamic analyses. In this thesis, we present a VM-based infrastructure, called the marker tracing framework (MTF), to address the deficiencies of existing solutions and provide better observability for VM-based languages. MTF serves as a solid foundation for implementing fine-grained, low-overhead program instrumentation. Specifically, MTF allows analysis clients to: 1) define custom events with rich semantics; 2) specify precisely the program locations where the events should trigger; and 3) adaptively enable or disable the instrumentation at runtime. In addition, MTF-based analysis clients are more powerful because they have access to all information available to the VM. To demonstrate the utility and effectiveness of MTF, we present two analysis clients: 1) dynamic typestate analysis with adaptive online program analysis (AOPA); and 2) selective probabilistic calling context analysis (SPCC). In addition, we evaluate the runtime performance of MTF and the typestate client with the DaCapo benchmarks. The results show that: 1) MTF has acceptable runtime overhead when tracing moderate numbers of marker events; 2) AOPA is highly effective in reducing the event frequency for the dynamic typestate analysis; and 3) language VMs can be exploited to offer greater observability.
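
To make the workflow above concrete, here is a minimal, hedged sketch of marker events bound to program locations with runtime enable/disable. The class and method names (MarkerEvent, TracingClient, register, fire) are hypothetical and are not the real MTF API, and the sketch is in Python rather than inside a Java VM; it only illustrates the three client capabilities listed in the abstract.

```python
# Hypothetical sketch of the marker-tracing idea described above: custom events
# bound to program locations, with runtime enable/disable. Names are illustrative
# only and do not correspond to the actual MTF API.

from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class MarkerEvent:
    """A custom event with user-defined semantics (here just a name and payload keys)."""
    name: str
    payload_keys: tuple = ()


@dataclass
class TracingClient:
    """An analysis client that registers events at specific program locations."""
    handlers: Dict[str, Callable[[dict], None]] = field(default_factory=dict)
    enabled: Dict[str, bool] = field(default_factory=dict)

    def register(self, event: MarkerEvent, location: str, handler: Callable[[dict], None]) -> None:
        key = f"{event.name}@{location}"
        self.handlers[key] = handler
        self.enabled[key] = True          # instrumentation starts enabled

    def set_enabled(self, event_name: str, location: str, on: bool) -> None:
        # Adaptive toggling: an analysis can switch off hot, already-verified sites.
        self.enabled[f"{event_name}@{location}"] = on

    def fire(self, event_name: str, location: str, payload: dict) -> None:
        key = f"{event_name}@{location}"
        if self.enabled.get(key, False):
            self.handlers[key](payload)


if __name__ == "__main__":
    client = TracingClient()
    alloc = MarkerEvent("object_alloc", ("class_name",))
    client.register(alloc, "Foo.bar:42", lambda p: print("alloc of", p["class_name"]))
    client.fire("object_alloc", "Foo.bar:42", {"class_name": "java.io.File"})
    client.set_enabled("object_alloc", "Foo.bar:42", False)   # disable after enough samples
    client.fire("object_alloc", "Foo.bar:42", {"class_name": "java.io.File"})  # now silent
```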

Relevance: 90.00%

Abstract:

In this paper, we consider the problem of topology design for optical networks. We investigate the problem of selecting switching sites to minimize the total cost of the optical network. The cost of an optical network can be expressed as the sum of three main factors: the site cost, the link cost, and the switch cost. To the best of our knowledge, this problem has not been studied in the general form investigated in this paper. We present a mixed integer quadratic programming (MIQP) formulation of the problem to find the optimal value of the total network cost. We also present an efficient heuristic to approximate the solution in polynomial time. The experimental results show good performance of the heuristic: in the experiments with 10-node topologies, the total network cost computed by the heuristic is within 2% to 21% of the optimal value, and in 51% of those experiments it is within 8% of the optimal value. We also discuss the insights gained from our experiments.
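
The abstract does not reproduce the MIQP formulation, so the following is only an illustrative sketch, under simplifying assumptions, of the three-term cost objective (site + link + switch cost) and of a naive greedy site-selection heuristic. The star-like routing simplification, the per-port switch cost, and all function names are assumptions, not the paper's actual model or heuristic.

```python
# Illustrative sketch of a three-term network cost (site + link + switch)
# and a naive greedy site-selection heuristic. All names and the cost-model
# simplifications are assumed for illustration only.


def total_cost(sites, demands, site_cost, link_cost, switch_cost_per_port):
    """sites: chosen switching sites; demands: list of (a, b) node pairs to connect."""
    cost = sum(site_cost[s] for s in sites)
    ports = {s: 0 for s in sites}
    for a, b in demands:
        # route each demand through its cheapest chosen site (star-like simplification)
        s = min(sites, key=lambda s: link_cost[a][s] + link_cost[s][b])
        cost += link_cost[a][s] + link_cost[s][b]
        ports[s] += 2
    cost += sum(switch_cost_per_port * p for p in ports.values())
    return cost


def greedy_select(candidates, demands, site_cost, link_cost, switch_cost_per_port):
    """Greedily add the site that most reduces total cost until no improvement."""
    chosen = [min(candidates, key=lambda s: site_cost[s])]   # seed with the cheapest site
    best = total_cost(chosen, demands, site_cost, link_cost, switch_cost_per_port)
    improved = True
    while improved:
        improved = False
        for s in candidates:
            if s in chosen:
                continue
            c = total_cost(chosen + [s], demands, site_cost, link_cost, switch_cost_per_port)
            if c < best:
                best, chosen, improved = c, chosen + [s], True
    return chosen, best


if __name__ == "__main__":
    nodes = ["n0", "n1", "n2", "n3"]
    site_cost = {n: 10.0 for n in nodes}
    link_cost = {a: {b: (0.0 if a == b else 3.0) for b in nodes} for a in nodes}
    demands = [("n0", "n1"), ("n2", "n3"), ("n0", "n3")]
    print(greedy_select(nodes, demands, site_cost, link_cost, switch_cost_per_port=1.0))
```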

Relevance: 90.00%

Abstract:

[ES] The purpose of this Final Degree Project is to offer a solution that helps people manage both their personal and business tasks more productively. Applications of this kind are currently very successful. We decided to build the application around the Getting Things Done (GTD) methodology, since it increases productivity and reduces work-related stress. To date there are not many applications that use this methodology, and those that do apply it only in a very basic way. Building on this methodology, and guided by the tutor's experience, we combined GTD with time tracking to further improve the productivity of the people who use the software. The result of this Final Degree Project is the foundation of a web application for task management. The software created is fully functional, very easy to use, very intuitive, and follows the Getting Things Done philosophy. The main objectives achieved in this project were: user management; task and project management; application of the GTD methodology; and tracking of productive and unproductive time, interruptions, and timers. The application was developed as a Final Degree Project in Computer Engineering, covering all phases of software development to obtain a functional product approved by the tutor, who played the role of a potential client. The project followed the RUP methodology: use-case driven, iterative, and incremental. To complete the process, a feature list was drawn up, the use cases were specified, and analysis, design, implementation, and testing phases were carried out. The main technologies used were Ruby on Rails, HTML5, CSS, AJAX, and JavaScript. The long-term goal is for this solution to serve as an implementation baseline that, with the necessary improvements, could bring to market a strong task management product based on the GTD methodology.
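
As an illustration of the GTD task states and the productive/unproductive time tracking described above, here is a minimal sketch. The thesis itself was built with Ruby on Rails; this Python fragment is only a hypothetical data model, and all names are illustrative.

```python
# Minimal sketch (not from the thesis, which used Ruby on Rails) of GTD task
# states and productive-time tracking with interruptions. Names are illustrative.

from dataclasses import dataclass
from enum import Enum
from typing import Optional
import time


class GtdState(Enum):
    INBOX = "inbox"            # captured, not yet processed
    NEXT_ACTION = "next"       # actionable, do as soon as possible
    WAITING_FOR = "waiting"    # delegated or blocked
    SOMEDAY = "someday"        # deferred, review later
    DONE = "done"


@dataclass
class Task:
    title: str
    state: GtdState = GtdState.INBOX
    project: Optional[str] = None
    productive_seconds: float = 0.0
    interruptions: int = 0
    _started_at: Optional[float] = None

    def start_timer(self) -> None:
        self._started_at = time.monotonic()

    def stop_timer(self, interrupted: bool = False) -> None:
        if self._started_at is not None:
            self.productive_seconds += time.monotonic() - self._started_at
            self._started_at = None
        if interrupted:
            self.interruptions += 1


if __name__ == "__main__":
    t = Task("Write project report", project="TFG")
    t.state = GtdState.NEXT_ACTION
    t.start_timer()
    t.stop_timer(interrupted=True)
    print(t.state.value, round(t.productive_seconds, 3), t.interruptions)
```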

Relevance: 90.00%

Abstract:

Generic object recognition is an important function of the human visual system, and one that people rely on constantly in everyday life. For an artificial vision system it is a hard and challenging task, because instances of the same object category can generate very different images depending on variables such as illumination conditions, the pose of the object, the viewpoint of the camera, partial occlusions, and unrelated background clutter. The purpose of this thesis is to develop a system that can classify objects in 2D images based on context and identify which category an object belongs to. Given an image, the system classifies it and determines the correct category of the object. A further objective of this thesis is to test the performance and precision of different supervised machine learning algorithms on this specific task of object image categorization. Across several experiments, the implemented application achieves good categorization performance despite the difficulty of the problem. The project remains open to future improvement: new algorithms or other feature extraction techniques could be incorporated to make the system more reliable. The application can be installed on an embedded system and, once trained (training is performed outside the system), classify objects in real time. The information provided by a 3D stereo camera, developed within the Department of Computer Engineering of the University of Bologna, can be used to improve the accuracy of the classification task: the idea is to segment a single object in a scene using the depth provided by the stereo camera, making the classification more accurate.
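
A minimal sketch of the kind of comparison described above: evaluating several supervised classifiers on fixed-length image feature vectors with scikit-learn. The feature extraction pipeline and dataset of the thesis are not reproduced; synthetic vectors stand in for real descriptors, and the classifier choices are assumptions.

```python
# Minimal sketch of comparing supervised classifiers on image feature vectors.
# Synthetic vectors stand in for real image descriptors here.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for bag-of-visual-words histograms: 300 images, 128-dim features, 5 categories.
X = rng.normal(size=(300, 128))
y = rng.integers(0, 5, size=300)

classifiers = {
    "linear SVM": SVC(kernel="linear"),
    "k-NN (k=5)": KNeighborsClassifier(n_neighbors=5),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```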

Relevance: 90.00%

Abstract:

Background: In protein sequence classification, identifying the sequence motifs or n-grams that can precisely discriminate between classes is a more interesting scientific question than the classification itself. A number of classification methods aim at accurate classification but fail to explain which sequence features actually contribute to the accuracy. We hypothesize that sequences in lower denominations (n-grams) can be used to explore the sequence landscape and to identify class-specific motifs that discriminate between classes during classification. Discriminative n-grams are short peptide sequences that are highly frequent in one class but are either minimally present or absent in other classes. In this study, we present a new substitution-based scoring function for identifying discriminative n-grams that are highly specific to a class. Results: We present a scoring function based on discriminative n-grams that can effectively discriminate between classes. The scoring function initially harvests the entire set of 4- to 8-grams from the protein sequences of the different classes in the dataset. Similar n-grams of the same size are combined to form new n-grams, where similarity is defined by positive amino acid substitution scores in the BLOSUM62 matrix. Substitution results in a large increase in the number of discriminatory n-grams harvested. Because the dataset is unbalanced, the frequencies of the n-grams are normalized using a dampening factor, which gives more weight to n-grams that appear in fewer classes and vice versa. After the n-grams are normalized, the scoring function identifies discriminative 4- to 8-grams for each class that are frequent enough to be above a selection threshold. By mapping these discriminative n-grams back to the protein sequences, we obtain contiguous n-grams that represent short class-specific motifs in protein sequences. Our method fared well compared to an existing motif-finding method known as Wordspy. We have validated our enriched set of class-specific motifs against the functionally important motifs obtained from the NLSdb, Prosite, and ELM databases. We demonstrate that this method is very generic and can thus be widely applied to detect class-specific motifs in many protein sequence classification tasks. Conclusion: The proposed scoring function and methodology are able to identify class-specific motifs using discriminative n-grams derived from the protein sequences. The use of amino acid substitution scores for similarity detection and of the dampening factor to normalize unbalanced datasets has a significant effect on the performance of the scoring function. Our multipronged validation tests demonstrate that this method can detect class-specific motifs from a wide variety of protein sequence classes, with a potential application to detecting proteome-specific motifs of different organisms.
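
A simplified sketch of the harvesting and dampening steps described above: collect 4- to 8-grams per class, then weight each n-gram by how few classes it appears in. The BLOSUM62-based merging of similar n-grams and the paper's exact scoring function are omitted, and the particular dampening formula used here is an assumption.

```python
# Simplified sketch of class-specific n-gram scoring: harvest 4- to 8-grams per
# class, then dampen by the number of classes in which each n-gram occurs
# (more classes -> lower weight). The dampening form used here is assumed.

import math
from collections import Counter, defaultdict


def harvest_ngrams(sequences, n_min=4, n_max=8):
    counts = Counter()
    for seq in sequences:
        for n in range(n_min, n_max + 1):
            for i in range(len(seq) - n + 1):
                counts[seq[i:i + n]] += 1
    return counts


def discriminative_ngrams(class_to_sequences, threshold=2.0):
    per_class = {c: harvest_ngrams(seqs) for c, seqs in class_to_sequences.items()}
    n_classes = len(per_class)

    # How many classes each n-gram appears in (used for dampening).
    class_presence = defaultdict(int)
    for counts in per_class.values():
        for g in counts:
            class_presence[g] += 1

    results = {}
    for c, counts in per_class.items():
        total = sum(counts.values()) or 1
        scored = {}
        for g, f in counts.items():
            dampen = math.log(1.0 + n_classes / class_presence[g])  # fewer classes -> larger weight
            score = (f / total) * 1000 * dampen
            if score >= threshold:
                scored[g] = score
        results[c] = sorted(scored, key=scored.get, reverse=True)
    return results


if __name__ == "__main__":
    data = {
        "nuclear": ["PKKKRKVEDP", "KRPAATKKAGQAKKKK"],
        "membrane": ["LLLLLLVVVVAAAA", "GAVLIPFMWLLLL"],
    }
    for cls, grams in discriminative_ngrams(data).items():
        print(cls, grams[:3])
```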

Relevance: 90.00%

Abstract:

Neuromorphic computing is an emerging field with a wide range of applications. Its challenge lies in developing a brain-inspired architecture that can emulate the human brain and work in real-time applications. In this report, a flexible neural architecture is presented, consisting of a 128 x 128 SRAM crossbar memory and 128 spiking neurons. The neurons use a digital integrate-and-fire model. All components are designed in a 45 nm technology node. The core is fully digital and can be configured for certain neuron parameters, axon types, and synapse states. Learning for this architecture is done offline: the circuit is trained with a well-known algorithm, the Restricted Boltzmann Machine (RBM), and linear classifiers are trained on the output of the RBM. Finally, the circuit was tested on a handwritten digit recognition application. Future prospects for this architecture are also discussed.
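
The offline training scheme described above (an RBM followed by a linear classifier, evaluated on handwritten digits) can be sketched with scikit-learn as follows. The mapping onto the 128 x 128 SRAM crossbar and the spiking integrate-and-fire neurons is not modeled here; this is only an illustrative software-level sketch.

```python
# Minimal sketch of the offline training scheme: a Restricted Boltzmann Machine
# learns features and a linear classifier is trained on its output, evaluated on
# handwritten digits. Hardware mapping is not modeled.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import minmax_scale

X, y = load_digits(return_X_y=True)
X = minmax_scale(X)                       # RBM expects inputs in [0, 1]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("rbm", BernoulliRBM(n_components=128, learning_rate=0.06, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),   # linear readout on RBM features
])
model.fit(X_train, y_train)
print("test accuracy:", round(model.score(X_test, y_test), 3))
```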

Relevance: 90.00%

Abstract:

The Mobile Mesh Network based In-Transit Visibility (MMN-ITV) system provides global real-time tracking capability for logistics systems. In-transit containers form a multi-hop mesh network to forward tracking information to nearby sinks, which further deliver the information to the remote control center via satellite. The fundamental challenge for the MMN-ITV system is the energy constraint of the battery-operated containers. Coupled with the unique mobility pattern, the cross-MMN behavior, and the large spanned area, this makes it necessary to investigate energy-efficient communication in the MMN-ITV system thoroughly. First, this dissertation models energy-efficient routing under the unique pattern of the cross-MMN behavior. A new modeling approach, the pseudo-dynamic modeling approach, is proposed to measure the energy efficiency of routing methods in the presence of the cross-MMN behavior. With this approach, it is shown that shortest-path routing and load-balanced routing are energy-efficient in mobile networks and static networks, respectively. For the MMN-ITV system, which contains both mobile and static MMNs, an energy-efficient routing method, energy-threshold routing, is proposed to achieve the best tradeoff between them. Second, due to the cross-MMN behavior, neighbor discovery is executed frequently to help new containers join the MMN and hence consumes a similar amount of energy to data communication. By exploiting the unique pattern of the cross-MMN behavior, this dissertation proposes energy-efficient neighbor-discovery wakeup schedules that save up to 60% of the energy spent on neighbor discovery. Vehicular Ad Hoc Network (VANET)-based inter-vehicle communication is increasingly believed to enhance traffic safety and transportation management at low cost. The end-to-end delay is critical for time-sensitive safety applications in VANETs and can be a decisive performance metric for them. This dissertation presents a complete analytical model to evaluate the end-to-end delay against the transmission range and the packet arrival rate. The model shows a significant end-to-end delay increase from non-saturated to saturated networks. It therefore suggests that distributed power control and admission control protocols for VANETs should aim at improving the real-time capacity (the maximum packet generation rate that does not cause saturation) rather than the delay itself. Based on this model, it is shown that adopting a uniform transmission range for every vehicle may hinder delay improvement, since it does not allow short path lengths and low interference to coexist. Clustering is proposed to configure non-uniform transmission ranges for the vehicles. Analysis and simulation confirm that such a configuration enhances the real-time capacity and provides an improved trade-off between end-to-end delay and network capacity. A distributed clustering protocol with minimal message overhead is proposed, which achieves low convergence time.
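
As a rough illustration of a threshold-based compromise between shortest-path and load-balanced forwarding, in the spirit of the energy-threshold routing mentioned above, consider the following sketch. The dissertation's actual routing rule is not reproduced; the fallback policy, the data structure, and all names here are assumptions.

```python
# Illustrative sketch of a threshold-based next-hop choice: prefer the shortest
# path among neighbors with enough residual energy; if all are depleted, spread
# load by picking the neighbor with the most remaining energy. The real
# energy-threshold routing rule of the dissertation is not reproduced here.

from dataclasses import dataclass
from typing import List


@dataclass
class Neighbor:
    node_id: str
    hops_to_sink: int          # hop count to the sink via this neighbor
    residual_energy: float     # remaining battery, in joules


def pick_next_hop(neighbors: List[Neighbor], energy_threshold: float) -> Neighbor:
    healthy = [n for n in neighbors if n.residual_energy >= energy_threshold]
    if healthy:
        return min(healthy, key=lambda n: n.hops_to_sink)        # shortest-path behavior
    return max(neighbors, key=lambda n: n.residual_energy)       # load-balancing fallback


if __name__ == "__main__":
    nbrs = [Neighbor("A", 2, 1.2), Neighbor("B", 3, 8.5), Neighbor("C", 4, 9.0)]
    print(pick_next_hop(nbrs, energy_threshold=2.0).node_id)     # -> "B"
```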

Relevance: 90.00%

Abstract:

Obesity is becoming an epidemic in most developed countries. The fundamental cause of obesity and overweight is an energy imbalance between calories consumed and calories expended. It is essential to monitor everyday food intake for obesity prevention and management. Existing dietary assessment methods usually require manual recording and recall of food types and portions, so the accuracy of the results largely relies on uncertain factors such as the user's memory, food knowledge, and portion estimates; as a result, accuracy is often compromised. Accurate and convenient dietary assessment methods are still lacking and are needed in both population and research settings. In this thesis, an automatic food intake assessment method using the cameras and inertial measurement units (IMUs) of smartphones was developed to help people foster a healthy lifestyle. With this method, users use their smartphones before and after a meal to capture images or videos of the meal. The smartphone recognizes the food items, calculates the volume of the food consumed, and provides the results to the user. The technical objective is to explore the feasibility of image-based food recognition and image-based volume estimation. This thesis comprises five publications that address four specific goals of this work: (1) to develop a prototype system with existing methods in order to review the literature, find its drawbacks, and explore the feasibility of developing novel methods; (2) based on the prototype system, to investigate new food classification methods that improve recognition accuracy to a field-application level; (3) to design indexing methods for large-scale image databases to facilitate the development of new food image recognition and retrieval algorithms; and (4) to develop novel, convenient, and accurate food volume estimation methods using only smartphones with cameras and IMUs. A prototype system was implemented to review existing methods. An image feature detector and descriptor were developed, and a nearest-neighbor classifier was implemented to classify food items. A credit card marker method was introduced for metric-scale 3D reconstruction and volume calculation. To increase recognition accuracy, novel multi-view food recognition algorithms were developed to recognize regular-shaped food items. To further increase accuracy and make the algorithm applicable to arbitrary food items, new food features and new classifiers were designed. The efficiency of the algorithm was increased by developing a novel image indexing method for large-scale image databases. Finally, the volume calculation was enhanced by reducing the reliance on the marker and introducing IMUs. Sensor fusion techniques combining measurements from cameras and IMUs were explored to infer the metric scale of the 3D model and to reduce sensor noise.
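
A minimal sketch of recovering metric scale from a reference object of known size, such as the card marker mentioned above, and applying it to a reconstructed volume. The multi-view reconstruction and camera-IMU fusion are not shown; the numbers and function names are illustrative only.

```python
# Minimal sketch of metric-scale recovery from a reference card of known size,
# applied to a reconstructed food volume. Reconstruction and IMU fusion are
# not modeled; values are illustrative.

CREDIT_CARD_WIDTH_MM = 85.6   # ISO/IEC 7810 ID-1 card width


def metric_scale(card_width_in_model_units: float) -> float:
    """Millimetres per model unit, from the card's apparent width in the 3D model."""
    return CREDIT_CARD_WIDTH_MM / card_width_in_model_units


def food_volume_ml(volume_in_model_units: float, card_width_in_model_units: float) -> float:
    s = metric_scale(card_width_in_model_units)      # mm per model unit
    volume_mm3 = volume_in_model_units * s ** 3      # the scale factor is cubed for volumes
    return volume_mm3 / 1000.0                       # 1 ml = 1000 mm^3


if __name__ == "__main__":
    # A food item reconstructed as 0.45 model-units^3, card measured as 1.7 model units wide.
    print(round(food_volume_ml(0.45, 1.7), 1), "ml")
```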

Relevance: 90.00%

Abstract:

Sensor networks have been an active research area in the past decade due to the variety of their applications. Many studies have addressed the problems underlying the middleware services of sensor networks, such as self-deployment, self-localization, and synchronization. With these middleware services in place, sensor networks have grown into a mature technology used as a detection and surveillance paradigm in many real-world applications. Individual sensors are small in size, so they can be deployed in areas with limited space to make unobstructed measurements in locations that traditional centralized systems would have trouble reaching. However, sensor networks have a few physical limitations that can prevent sensors from performing at their maximum potential: individual sensors have a limited power supply, the wireless band can become very cluttered when multiple sensors try to transmit at the same time, and the limited communication range means the network may not have a 1-hop communication topology, so routing can be a problem in many cases. Carefully designed algorithms can alleviate these physical limitations and allow sensor networks to be utilized to their full potential. Graphical models are an intuitive choice for designing sensor network algorithms. This thesis focuses on a classic application of sensor networks, the detection and tracking of targets. It develops feasible inference techniques for sensor networks using statistical graphical-model inference, binary sensor detection, event isolation, and dynamic clustering. The main strategy is to use only binary data for rough global inferences and then dynamically form small-scale clusters around the target for detailed computations. This framework is then extended to network topology manipulation, so that it can be applied to tracking in different network topology settings. Finally, the system was tested in both simulated and real-world environments. The simulations were performed on various network topologies, from regularly distributed to randomly distributed networks. The results show that the algorithm performs well in randomly distributed networks and hence requires minimal deployment effort. The experiments were carried out in both corridor and open-space settings. An in-home fall detection system was simulated with real-world settings: it was set up with 30 Bumblebee radars and 30 ultrasonic sensors driven by TI EZ430-RF2500 boards, scanning a typical 800 sq ft apartment. The Bumblebee radars were calibrated to detect a falling human body, and the two-tier tracking algorithm was used with the ultrasonic sensors to track the location of elderly residents.
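
A simplified sketch of the two-tier strategy described above: binary detections give a rough global estimate, and a dynamically formed cluster of nearby sensors refines it. The graphical-model inference used in the thesis is not reproduced; the range-weighted refinement below is an assumed stand-in.

```python
# Simplified sketch of two-tier tracking: tier 1 uses only binary detection bits
# for a rough estimate, tier 2 forms a small cluster around it for refinement.

import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]


def rough_estimate(positions: Dict[str, Point], fired: List[str]) -> Point:
    """Tier 1: centroid of all sensors whose binary detection bit is set."""
    xs = [positions[s][0] for s in fired]
    ys = [positions[s][1] for s in fired]
    return (sum(xs) / len(xs), sum(ys) / len(ys))


def refine_with_cluster(positions: Dict[str, Point], ranges: Dict[str, float],
                        rough: Point, cluster_radius: float) -> Point:
    """Tier 2: sensors within cluster_radius of the rough estimate weigh in with range data."""
    cluster = [s for s, p in positions.items()
               if math.dist(p, rough) <= cluster_radius and s in ranges]
    if not cluster:
        return rough
    weights = [1.0 / max(ranges[s], 1e-6) for s in cluster]   # closer sensors weigh more
    wsum = sum(weights)
    x = sum(w * positions[s][0] for w, s in zip(weights, cluster)) / wsum
    y = sum(w * positions[s][1] for w, s in zip(weights, cluster)) / wsum
    return (x, y)


if __name__ == "__main__":
    pos = {"s1": (0, 0), "s2": (4, 0), "s3": (2, 3), "s4": (10, 10)}
    rough = rough_estimate(pos, fired=["s1", "s2", "s3"])
    print(refine_with_cluster(pos, {"s1": 2.2, "s2": 2.4, "s3": 1.1}, rough, cluster_radius=5.0))
```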

Relevance: 90.00%

Abstract:

People's lives today are pervaded by the Internet; hardly anything is not "networked" or "available electronically". The world is changing: the "information society" consumes information in real time on mobile devices, independent of time and place. This also applies in part to the education and training sector: "e-learning" refers to the electronic support of learning. Learning takes place "online", and content is available digitally. At the same time, the life situation of the so-called "digital natives", the young individuals of the information society, has changed. They demand education systems that are flexible in time and place, expect educational institutions to make information comprehensively available in digital form, and no longer want to subordinate their lives to teaching schedules and timetables: learning should fit their own lives and accompany them throughout life. New "learning scenarios", e.g. for single-parent part-time students or working professionals, should become possible without difficulty. This is what a paradigm developed by the European Union, summarized under the term "lifelong learning", is intended to deliver. Both e-learning and lifelong learning are gaining importance, as (German) industry is raising the issue of the "shortage of skilled workers". The demand for specially trained engineers in the STEM (German: MINT) fields is to be met as quickly as possible and the "staffing gap" closed, in order to continue to secure growth and prosperity. Dedicated e-learning solutions for the STEM fields have the potential to offer fast and flexible education and training for engineers, in which subject knowledge is conveyed with reference to the concrete requirements of industry. At present, however, such systems do not yet exist. What are the STEM-specific requirements for such an e-learning application? Besides new technologies, it must above all satisfy the functional requirements of the STEM fields, the various target groups (e.g. educational institutions, learners or "digital natives", industry), and the paradigm of lifelong learning; that is, it must bring together technical and conceptual requirements. Against this background, this thesis presents a framework for building such a solution. The practical results are based on the blended e-learning system of the project "Technische Informatik Online" (VHN-TIO).

Relevance: 90.00%

Abstract:

Capitalizing on the power of the Internet, the Houston Academy of Medicine-Texas Medical Center (HAM-TMC) Library is using search to reinvent itself in this digital age. By using the Vivisimo Velocity Search Platform to search its multiple repositories, the library has helped users find information they never knew existed and has positioned itself as a thought leader in its community.

Relevance: 90.00%

Abstract:

A recent study by the Vicerrectorado de Ordenación Académica y Planificación Estratégica of the Technical University of Madrid (UPM) defines the satisfaction of the university student body as "the response that the University offers to the expectations and demands of service of the students, considered in a general way". Besides being an indicator of the student's academic and institutional integration, the assessment of student engagement allows us to adapt the academic offering and the extension services of the University to the real needs of the students. The process of convergence towards the European Higher Education Area (EHEA) raises the need to train in competences, that is to say, to develop in our students capacities and knowledge beyond the purely theoretical and practical. Therefore, the students' perception and experience of the educational process and environment is an important issue to be addressed in order to meet their expectations and achieve a curriculum in accordance with EHEA expectations. The present study aims to explore student motivation and approval of the educational environment at UPM. To this end, a total of 97 students enrolled in the undergraduate programs of Civil Engineering, Computer Engineering, and Agronomic Engineering at UPM were surveyed. The survey consisted of 40 questions divided into three blocks. The first block comprised 20 personal questions gathering, besides sex and age, the degree of commitment, involvement, and dedication to the institution and to academic tasks. The second block comprised 10 questions on the students' perception of teaching quality, and the final block 10 questions regarding the Bologna Process. The students' personal motivation was moderately high, with a score of 3.6 (all scores are on a 5-point scale); the most valued items were obtaining a university degree (4.3) and friendship between students (4.2). No significant difference was found between sexes (P=0.23), since the averages for this block of questions were 3.7±0.3 and 3.5±0.4 for women and men, respectively. Students are moderately satisfied with their undergraduate studies, with an average score of 3.2; the questions reflecting the lowest satisfaction concern the research profile of the teachers (2.8) and the organization of the Schools (2.9). The best-valued questions relate to the usefulness and quality of the degrees, with 3.5 and 3.4 respectively, and to the interest of the courses within the degree (3.4). By sex, the results of this block of questions are similar (3.1±0.3 and 3.2±0.3 for men and women, respectively; P=0.79). There were also no differences (P=0.39) between students who combine work and study and those who do not work (3.1±0.2 and 3.2±0.3, respectively). In conclusion, students at UPM present an acceptable degree of motivation and satisfaction with regard to the studies and services offered by their respective Schools. Both characteristics receive the same value for men and women, and for students who combine work and study as well as for those who devote themselves only to studying. Significantly, students who are more engaged and attend class regularly show the highest degree of satisfaction. Overall, there is a great lack of information regarding the Bologna Process; in fact, the majority would like to know more about what it is, what it means, and what changes its implementation will involve.

Relevance: 90.00%

Abstract:

Background: This research addresses first and foremost the replication and also the synthesis of software engineering (SE) experiments. Replication is impossible without access to all the details of the original experiment. But the description of experiments is usually incomplete because knowledge is tacit, there is no standard reporting format, and there are hardly any tools to support the generation of experimental reports. This means that the original experiment cannot be reproduced exactly. These issues place considerable constraints on experimenters' options for carrying out replications and ultimately synthesizing experiments. Aim: The aim of the research is to formalize the SE experimental process in order to facilitate information communication among experimenters. Context: This PhD research was developed within the empirical software engineering research group (GrISE) at the Universidad Politécnica de Madrid (UPM)'s School of Computer Engineering (ETSIINF) as part of project TIN2011-23216, entitled "Technologies for Software Engineering Experiment Replication and Synthesis", funded by the Spanish Government. The GrISE research group fulfils all the requirements (an established family of experiments with at least three experimental lines and lengthy replication experience, with 16 replications prior to 2011 in the software testing techniques line) and provides favourable conditions for the research to be conducted in the best possible way, such as full access to its information. Research Method: We opted for action research (AR) as the research method best suited to the characteristics of the investigation. Results were generated in successive rounds of AR addressing specific communication problems among experimenters. Results: The conceptual model of the experimental cycle was formalized from the viewpoint of the three key roles representing experimenters in the experimental process: research manager, experiment manager, and senior experimenter. The model of the experimental cycle itself was formalized by means of a workflow and a process diagram. In tandem with the formalization of the SE experimental process, an infrastructure for sharing and replicating experiments (ISRE) was developed as a proof of concept of an SE experimentation support environment. Finally, guidelines for developing SE experimentation support environments were designed based on the study of the key features that the models of experimentation support tools for different experimental disciplines have in common. Conclusions: The key contribution of this research is the formalization of the SE experimental process. GrISE experimenters were satisfied with both the models representing the formalization of the experimental cycle and the ISRE tool built in order to evaluate the models. To further validate the formalization, this study should be replicated at other research groups representative of the experimental SE community. Future Research Lines: The achievement of the aims and the resulting findings have led to new research lines: (1) assess the feasibility of building a mechanism to help experimenters collaboratively specify tacit knowledge based on debate and consensus; (2) continue empirical research at the same research group in order to cover the remainder of the experimental cycle (for example, new experiments, results synthesis, etc.); (3) replicate the research process at other ESE research groups; and (4) update the tools of the proof of concept in order to meet the constraints and needs of a real research environment.

Relevance: 90.00%

Abstract:

The use of Project Based Learning has spread widely over the last decades, not only across countries but also across disciplines. One of the most significant characteristics of this methodology is the use of ill-structured problems as the central activity of the course, which poses an important difficulty for both teachers and students. This work presents a model, supported by a tool, aimed at helping teachers and students in Project Based Learning overcome these difficulties. First, teachers are guided in designing the project following the main principles of the methodology. Once the project has been specified to the desired level of depth, the same tool helps students complete the project specification and organize the implementation. Collaborative work among different users is supported in both phases. The tool has been successfully tested by designing two real projects used in Computer Engineering and Software Engineering degrees.