994 results for Developing Software


Relevance:

30.00%

Publisher:

Abstract:

Higher and further education institutions are increasingly using social software tools to support teaching and learning. A growing body of research investigates the diversity of tools and their range of contributions. However, little research has focused on investigating the role of the educator in the context of a social software initiative, even though the educator is critical for the introduction and successful use of social software in a course environment. Hence, we argue that research on social software should place greater emphasis on the educators, as their roles and activities (such as selecting the tools, developing the tasks and facilitating the student interactions on these tools) are instrumental to most aspects of a social software initiative. To this end, we have developed an agenda for future research on the role of the educator. Drawing on role theory, both as the basis for a systematic conceptualization of the educator role and as a guiding framework, we have developed a series of concrete research questions that address core issues associated with the educator roles in a social software context and provide recommendations for further investigations. By developing a research agenda we hope to stimulate research that creates a better understanding of the educator’s situation and develops guidelines to help educators carry out their social software initiatives. Considering the significant role an educator plays in the initiation and conduct of a social software initiative, our research agenda ultimately seeks to contribute to the adoption and efficient use of social software in the educational domain.

Relevance:

30.00%

Publisher:

Abstract:

The need for efficient, sustainable, and planned utilization of resources is ever more critical. In the U.S. alone, buildings consume 34.8 quadrillion (10¹⁵) BTU of energy annually at a cost of $1.4 trillion. Of this energy, 58% is used for heating and air conditioning.

Several building energy analysis tools have been developed to assess energy demands and lifecycle energy costs in buildings. Such analyses are also essential for an efficient HVAC design that avoids the pitfalls of an under- or over-designed system. DOE-2 is among the most widely known full-building energy analysis models. It also constitutes the simulation engine of other prominent software such as eQUEST, EnergyPro, and PowerDOE. It is therefore essential that DOE-2 energy simulations be highly accurate.

Infiltration is an uncontrolled process through which outside air leaks into a building. Studies have estimated infiltration to account for up to 50% of a building's energy demand. Considered alongside the annual cost of buildings' energy consumption, this reveals the cost of air infiltration and underscores the need for prominent building energy simulation engines to account accurately for its impact.

In this research, the relative accuracy of current air infiltration calculation methods is evaluated against an intricate Multiphysics Hygrothermal CFD building envelope analysis. The full-scale CFD analysis is based on a meticulous representation of cracking in building envelopes and on real-life conditions. The research found that even the most advanced current infiltration methods, including those in DOE-2, exhibit up to 96.13% relative error versus the CFD analysis.

An Enhanced Model for Combined Heat and Air Infiltration Simulation was developed. The model yields a 91.6% improvement in relative accuracy over current models: it reduces error versus the CFD analysis to less than 4.5% while requiring less than 1% of the time needed for such a complex hygrothermal analysis. The algorithm used in our model was demonstrated to be easy to integrate into DOE-2 and other engines as a standalone method for evaluating infiltration heat loads. This will vastly increase the accuracy of such simulation engines while maintaining the speed and ease of use that make them so widely used in building design.
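
For context, the headline accuracy figures follow from simple relative-error arithmetic against the CFD baseline. A minimal sketch (not the study's code; the loads below are hypothetical and normalized to a CFD baseline of 1.0):

```python
# Illustrative arithmetic only: how the quoted relative-error and
# improvement figures relate. Loads are hypothetical, normalized values.

def relative_error(predicted: float, baseline: float) -> float:
    """Relative error of a prediction against a trusted baseline (here, CFD)."""
    return abs(predicted - baseline) / abs(baseline)

cfd = 1.0
current_method = cfd * (1 + 0.9613)   # worst case quoted: 96.13% relative error
enhanced_model = cfd * (1 + 0.045)    # enhanced model: < 4.5% relative error

e_current = relative_error(current_method, cfd)    # 0.9613
e_enhanced = relative_error(enhanced_model, cfd)   # 0.045

# One plausible reading of the quoted 91.6% improvement: the drop in
# relative error, 96.13% - 4.5% ~= 91.6 percentage points.
improvement = e_current - e_enhanced
print(f"error: {e_current:.2%} -> {e_enhanced:.2%} (improvement ~{improvement:.1%})")
```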

Relevance:

30.00%

Publisher:

Abstract:

Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency gives rise to an enormous number of possible system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built from user requirements using Petri nets and formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial-order models are automatically extracted from instrumented concurrent program executions, and potential atomicity violation bugs are automatically verified against the partial-order models using model checking. Our formal specification and verification of Mondex contribute to the worldwide effort to develop a verified software repository. Our method for automatically mining Petri net models from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable because it exploits the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools must balance precision against coverage. Building on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity violation predictions: 1) a post-prediction analysis method that increases coverage while ensuring precision; and 2) a follow-up replaying method that further increases coverage. Both methods are implemented in a fully automatic tool.
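
To make the target bug class concrete, here is a toy example (not from the dissertation) of an atomicity violation of exactly the shape McPatom reasons about: two threads and a single shared variable, where a check-then-act pair that is correct in isolation can be broken by an interleaving.

```python
# Toy illustration of the bug class: an atomicity violation between
# two threads on a single shared variable.
import threading

balance = 100  # the single shared variable
lock = threading.Lock()

def withdraw_buggy(amount: int) -> None:
    global balance
    # Check-then-act: each access is fine alone, but the pair is not
    # atomic -- a second thread can interleave between read and write,
    # letting both withdrawals pass the same check.
    if balance >= amount:      # access 1: read
        balance -= amount      # access 2: read-modify-write

def withdraw_fixed(amount: int) -> None:
    global balance
    with lock:                 # the whole check-then-act is one atomic block
        if balance >= amount:
            balance -= amount

# Two threads withdrawing 100 each: the buggy version can drive the
# balance negative under an unlucky schedule; the fixed one cannot.
threads = [threading.Thread(target=withdraw_buggy, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("final balance:", balance)
```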

Relevance:

30.00%

Publisher:

Abstract:

Software product line engineering promotes large-scale software reuse by developing a system family that shares a set of core features while enabling the selection and customization of a set of variabilities that distinguish each software product from the others. To meet time-to-market pressures, the software industry has been using the clone-and-own technique to create and manage new software products or product lines. Despite its advantages, the clone-and-own approach creates several difficulties for the evolution and reconciliation of software product lines, especially because of the code conflicts generated by the simultaneous evolution of the original software product line, called Source, and its cloned products, called Target. This thesis proposes an approach to evolving and reconciling cloned products based on mining software repositories and code conflict analysis techniques. The approach supports the identification of different kinds of code conflicts – lexical, structural and semantic – that can occur when development tasks – bug fixes, enhancements and new use cases – are integrated from the original, evolved software product line into the cloned product line. We have also conducted an empirical study characterizing the code conflicts produced during the evolution and merging of two large-scale web information system product lines. The results of our study demonstrate the approach's potential to automatically or semi-automatically resolve several existing code conflicts, thus helping to reduce the complexity and cost of reconciling cloned software product lines.
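
To make the taxonomy concrete, the sketch below (the signals and names are hypothetical, not the thesis tooling) shows how the outcome of integrating a Source change into the Target clone might be classified as lexical, structural, or semantic.

```python
# Hedged sketch with hypothetical names: classifying the conflicts that
# can arise when a task from the evolved Source product line is
# integrated into a cloned Target, per the lexical/structural/semantic
# taxonomy described above.
from dataclasses import dataclass

@dataclass
class IntegrationResult:
    textual_overlap: bool  # same lines edited on both sides (merge-tool conflict)
    anchor_missing: bool   # patched element renamed/moved/deleted in Target
    tests_pass: bool       # behaviour check after a clean textual merge

def classify_conflict(r: IntegrationResult) -> str:
    if r.textual_overlap:
        return "lexical"     # both sides changed the same text region
    if r.anchor_missing:
        return "structural"  # the element the change targets no longer exists as such
    if not r.tests_pass:
        return "semantic"    # merges cleanly, but the combined behaviour is wrong
    return "none"            # safe to integrate automatically

print(classify_conflict(IntegrationResult(False, False, False)))  # -> semantic
```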

Relevance:

30.00%

Publisher:

Abstract:

The spread of wireless networks and the growing proliferation of mobile devices require the development of mobility control mechanisms to support the different demands of traffic under different network conditions. A major obstacle to developing this kind of technology is the complexity involved in handling all the information about the large number of Moving Objects (MOs), as well as the signaling overhead required to manage these procedures in the network. Although several initiatives have been proposed by the scientific community to address this issue, they have not proved effective, since they depend on a specific request from the MO, which is responsible for triggering the mobility process. Moreover, they are often guided only by wireless-medium statistics, such as the Received Signal Strength Indicator (RSSI) of the candidate Point of Attachment (PoA). This work therefore seeks to develop, evaluate and validate a sophisticated communication infrastructure for Wireless Networking for Moving Objects (WiNeMO) systems by exploiting the flexibility provided by the Software-Defined Networking (SDN) paradigm, in which network functions are easily and efficiently deployed by integrating the OpenFlow and IEEE 802.21 standards. For benchmarking purposes, the analysis covered both the control and data planes and demonstrates that the proposal significantly outperforms typical IP-based SDN solutions with QoS-enabled capabilities, allowing the network to handle multimedia traffic with optimal Quality of Service (QoS) transport and acceptable Quality of Experience (QoE) over time.
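
As a flavour of what moving beyond RSSI-only, MO-triggered handovers can mean, here is a hedged sketch (the weights and names are illustrative assumptions, not the WiNeMO implementation) of a network-initiated, controller-side handover decision that blends candidate-PoA RSSI with load and loss context; the chosen target would then be enforced by pushing the corresponding OpenFlow flow-table updates.

```python
# Hedged sketch, hypothetical names: a controller-side handover decision
# driven by more than candidate-PoA RSSI alone.
from dataclasses import dataclass

@dataclass
class PoA:
    name: str
    rssi_dbm: float  # wireless-medium statistic (the usual sole criterion)
    load: float      # fraction of capacity in use, 0..1
    loss_rate: float # recent packet-loss rate on its uplink, 0..1

def score(poa: PoA) -> float:
    # Weighted blend; the weights are illustrative assumptions.
    rssi_term = max(0.0, min(1.0, (poa.rssi_dbm + 90) / 40))  # -90..-50 dBm -> 0..1
    return 0.4 * rssi_term + 0.3 * (1 - poa.load) + 0.3 * (1 - poa.loss_rate)

def pick_target(current: PoA, candidates: list[PoA], margin: float = 0.1) -> PoA:
    # Hysteresis margin avoids ping-pong handovers between similar PoAs.
    best = max(candidates, key=score)
    return best if score(best) > score(current) + margin else current

ap1 = PoA("ap1", rssi_dbm=-55, load=0.9, loss_rate=0.05)
ap2 = PoA("ap2", rssi_dbm=-62, load=0.2, loss_rate=0.01)
print(pick_target(ap1, [ap1, ap2]).name)  # ap2 wins despite weaker RSSI
```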

Relevance:

30.00%

Publisher:

Abstract:

In this extended abstract, we discuss recent research at Worcester into the inclusion of AI in ‘Serious Games’. Serious Games research aims to harness the power of computer game technology to produce educational and training materials. We prefer the name ‘Immersive Environments’ (IEs), since this emphasises the human psychological dimension. Creating compelling learning software requires rich engagement of the learner and a convincing learning experience. We believe that various aspects of the AI tradition can inform the production of such learning.

Relevance:

30.00%

Publisher:

Abstract:

The continuous advancement in computing, together with the decline in its cost, has resulted in technology becoming ubiquitous (Arbaugh, 2008; Gros, 2007). Technology is growing and is part of our lives in almost every respect, including the way we learn. Technology helps to collapse time and space in learning. For example, technology allows learners to engage with their instructors synchronously, in real time, and also asynchronously, by enabling sessions to be recorded. Space and distance are no longer an issue provided there is adequate bandwidth, which determines the most appropriate format, such as text, audio or video. Technology has revolutionised, and continues to revolutionise, the way learners learn, the way courses are designed and the way ‘lessons’ are delivered. The learning process can be made vastly more efficient as learners have knowledge at their fingertips, and unfamiliar concepts can be searched easily and an explanation found in seconds. Technology has also enabled learning to be more flexible, as learners can learn anywhere, at any time, and using different formats, e.g. text or audio. From the perspective of instructors and L&D providers, technology offers these same advantages, plus easy scalability. Administratively, preparatory work can be undertaken more quickly even whilst student numbers grow. Learners from far and new locations can be easily accommodated. In addition, many technologies can be easily scaled to accommodate new functionality and/or other new technologies.

‘Designing and Developing Digital and Blended Learning Solutions’ (5DBS) has been developed to recognise the growing importance of technology in L&D. This unit contains four learning outcomes, each with two assessment criteria (as in all other units), except Learning Outcome 3, which has three assessment criteria. The four learning outcomes in this unit are:

• Learning Outcome 1: Understand current digital technologies and their contribution to learning and development solutions;
• Learning Outcome 2: Be able to design blended learning solutions that make appropriate use of new technologies alongside more traditional approaches;
• Learning Outcome 3: Know about the processes involved in designing and developing digital learning content efficiently and what makes for engaging and effective digital learning content;
• Learning Outcome 4: Understand the issues involved in the successful implementation of digital and blended learning solutions.

Each learning outcome forms an individual chapter, and each assessment criterion is allocated its own sections within the respective chapter. This first chapter addresses the first learning outcome, which has two assessment criteria: summarise the range of currently available learning technologies; and critically assess a learning requirement to determine the contribution that could be made through the use of learning technologies. The introduction to chapter one is in Section 1.0. Chapter 2 discusses the design of blended learning solutions, considering how digital learning technologies may support face-to-face and online delivery. Three sets of learning theory (behaviourism, cognitivism and constructivism) are introduced, and the implications of each for the instructional design of blended learning are discussed. Chapter 3 centres on how relevant digital learning content may be created, and includes a review of the key roles, tools and processes involved in developing digital learning content.

Finally, Chapter 4 concerns the delivery and implementation of digital and blended learning solutions. This chapter surveys the key formats and models used to inform the configuration of virtual learning environment (VLE) software platforms. In addition, various software technologies that may be important in creating a VLE ecosystem to enhance the learning experience are outlined. We introduce the notion of the personal learning environment (PLE), which has emerged from the democratisation of learning. We also review the roles, tools, standards and processes that L&D practitioners need to consider in the delivery and implementation of a digital and blended learning solution.

Relevance:

30.00%

Publisher:

Abstract:

Developing a theoretical framework for pervasive information environments is an enormous goal. This paper aims to provide a small step towards such a goal. The following pages report on our initial investigations to devise a framework that will continue to support locative, experiential and evaluative data from ‘user feedback’ in an increasingly pervasive information environment. We loosely outline this framework by developing a methodology capable of moving from rapid deployment of software and hardware technologies towards the goal of a realistic immersive experience of pervasive information. We propose various technical solutions and address a range of problems, such as information capture, through a novel model of sensing, processing, visualization and cognition.

Relevance:

30.00%

Publisher:

Abstract:

Well-designed marine protected area (MPA) networks can deliver a range of ecological, economic and social benefits, and so a great deal of research has focused on developing spatial conservation prioritization tools to help identify important areas. However, whilst these software tools are designed to identify MPA networks that both represent biodiversity and minimize impacts on stakeholders, they do not consider complex ecological processes. Thus, it is difficult to determine the impacts that proposed MPAs could have on marine ecosystem health, fisheries and fisheries sustainability. Using the eastern English Channel as a case study, this paper explores an approach to address these issues by identifying a series of MPA networks using the Marxan and Marxan with Zones conservation planning software and linking them with a spatially explicit ecosystem model developed in Ecopath with Ecosim. We then use these to investigate potential trade-offs associated with adopting different MPA management strategies. Limited-take MPAs, which restrict the use of some fishing gears, could have positive benefits for conservation and fisheries in the eastern English Channel, even though they generally receive far less attention in research on MPA network design. Our findings, however, also clearly indicate that no-take MPAs should form an integral component of proposed MPA networks in the eastern English Channel, as they not only result in substantial increases in ecosystem biomass, fisheries catches and the biomass of commercially valuable target species, but are fundamental to maintaining the sustainability of the fisheries. Synthesis and applications. Using the existing software tools Marxan with Zones and Ecopath with Ecosim in combination provides a powerful policy-screening approach. This could help inform marine spatial planning by identifying potential conflicts and by designing new regulations that better balance conservation objectives and stakeholder interests. In addition, it highlights that appropriate combinations of no-take and limited-take marine protected areas might be the most effective when making trade-offs between long-term ecological benefits and short-term political acceptability.
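
The screening loop itself is simple in shape, even though each stage is a heavyweight model run. A minimal sketch (the function names and numbers are placeholders, since Marxan and Ecopath with Ecosim are driven through their own file formats and interfaces):

```python
# Hedged sketch: the shape of the Marxan -> Ecopath with Ecosim
# policy-screening loop. run_marxan/run_ewe stand in for the real
# tool plumbing, and the outcome numbers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Outcome:
    network: str
    biomass: float     # relative ecosystem biomass from the Ecosim run
    catch: float       # relative fisheries catch
    sustainable: bool  # do target stocks stay above reference points?

def run_marxan(strategy: str) -> str:
    return f"{strategy}-network"   # placeholder for a Marxan with Zones run

def run_ewe(network: str) -> Outcome:
    made_up = {                    # stand-in values, not model results
        "no-take-network":      Outcome("no-take-network", 1.30, 1.10, True),
        "limited-take-network": Outcome("limited-take-network", 1.10, 1.05, True),
        "status-quo-network":   Outcome("status-quo-network", 1.00, 1.00, False),
    }
    return made_up[network]

# One ecosystem-model evaluation per candidate network: the step that
# plain prioritization tools skip, and where the trade-offs show up.
for strategy in ("no-take", "limited-take", "status-quo"):
    print(run_ewe(run_marxan(strategy)))
```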

Relevance:

30.00%

Publisher:

Abstract:

This article is the result of the research project "Design of a model to improve cost estimation processes for software development companies" ("Diseño de un modelo para mejorar los procesos de estimación de costos para las empresas desarrolladoras de software"). It presents a review of the international literature to identify trends and methods for producing more accurate software cost estimates. Using the Delphi predictive method, a panel of experts from the Barranquilla software sector classified and rated five realistic estimation scenarios according to their probability of occurrence. A completely randomized experiment was designed, and its results pointed to two statistically similar scenarios, assessed qualitatively; from these, an analysis model was built around three agents: methodology, team capability, and technological products, each with three compliance levels for achieving more precise estimates.
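
As an illustration of the aggregation step in a Delphi exercise of this kind, here is a minimal sketch (the ratings are invented): each expert rates the five scenarios by probability of occurrence and the panel median drives the ranking, with the two top scenarios emerging as the similar pair.

```python
# Hedged sketch: Delphi-style aggregation of expert ratings.
# Numbers are invented purely for illustration.
from statistics import median

ratings = {  # scenario -> one probability-of-occurrence rating per expert (0..1)
    "S1": [0.7, 0.6, 0.8, 0.7],
    "S2": [0.7, 0.7, 0.6, 0.8],
    "S3": [0.3, 0.2, 0.4, 0.3],
    "S4": [0.5, 0.4, 0.5, 0.4],
    "S5": [0.2, 0.3, 0.2, 0.1],
}

# Rank scenarios by the panel median, a common Delphi consensus statistic.
ranked = sorted(ratings, key=lambda s: median(ratings[s]), reverse=True)
print(ranked)  # ['S1', 'S2', 'S4', 'S3', 'S5']: S1 and S2 tie as most likely
```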

Relevance:

30.00%

Publisher:

Abstract:

Part 5: Service Orientation in Collaborative Networks

Relevance:

30.00%

Publisher:

Abstract:

The industrial context is changing rapidly due to advancements in technology fueled by the Internet and Information Technology. The fourth industrial revolution counts integration, flexibility, and optimization among its fundamental pillars, and, in this context, Human-Robot Collaboration has become a crucial factor for manufacturing sustainability in Europe. Collaborative robots are appealing to many companies due to their low installation and running costs and high degree of flexibility, making them ideal for reshoring production facilities with a short return on investment. The ROSSINI European project aims to implement true Human-Robot Collaboration by designing, developing, and demonstrating a modular and scalable platform for integrating human-centred robotic technologies in industrial production environments. The project focuses on the safety concerns raised by introducing a cobot into a shared working area and aims to lay the groundwork for a new working paradigm at the industrial level. The main trigger for this thesis was the need for a software architecture suited to the robotic platform employed in one of the three use cases selected to deploy and test the new technology. The chosen application consists of the automatic loading and unloading of raw-material reels to an automatic packaging machine by an Autonomous Mobile Robot composed of an Autonomous Guided Vehicle, two collaborative manipulators, and an eye-on-hand vision system for performing tasks in a partially unstructured environment. The results obtained during the development of the ROSSINI use case were later used in the SENECA project, which addresses the need for robot-driven automatic cleaning of pharmaceutical bins in a very specific industrial context. The inherent versatility of mobile collaborative robots is evident from their deployment in the two projects with few hardware and software adjustments. The positive impact of Human-Robot Collaboration on diverse production lines motivates future industrial investment in research on this increasingly popular field.
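
As a feel for the task-level control such a use case needs, here is a hedged sketch (the states and names are hypothetical, not the ROSSINI architecture) of a state machine sequencing AGV navigation, eye-on-hand vision, and manipulator pick/place, with a safety rule that pauses the flow whenever a human enters the shared workspace.

```python
# Hedged sketch with hypothetical states: task-level sequencing for an
# automatic reel loading/unloading cycle with a collaborative-safety rule.
from enum import Enum, auto

class State(Enum):
    IDLE = auto()
    NAVIGATE_TO_STORE = auto()
    LOCATE_REEL = auto()
    PICK_REEL = auto()
    NAVIGATE_TO_MACHINE = auto()
    PLACE_REEL = auto()
    SAFE_PAUSE = auto()

TRANSITIONS = {
    State.IDLE: State.NAVIGATE_TO_STORE,
    State.NAVIGATE_TO_STORE: State.LOCATE_REEL,   # AGV done -> eye-on-hand vision
    State.LOCATE_REEL: State.PICK_REEL,           # reel pose found -> manipulator picks
    State.PICK_REEL: State.NAVIGATE_TO_MACHINE,
    State.NAVIGATE_TO_MACHINE: State.PLACE_REEL,  # load the packaging machine
    State.PLACE_REEL: State.IDLE,
}

def step(state: State, human_in_workspace: bool) -> State:
    # Safety rule: human presence overrides the task flow; in this toy
    # model, resuming from SAFE_PAUSE restarts the cycle from IDLE.
    if human_in_workspace:
        return State.SAFE_PAUSE
    return TRANSITIONS.get(state, State.IDLE)

print(step(State.IDLE, human_in_workspace=False))  # -> State.NAVIGATE_TO_STORE
```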

Relevance:

30.00%

Publisher:

Abstract:

This thesis highlights the importance of ad hoc designed and developed embedded systems in the implementation of intelligent sensor networks. As evidence, four application areas are presented: Precision Agriculture, Bioengineering, Automotive, and Structural Health Monitoring. For each field, the design and development of one or more smart devices is reported, together with the on-board processing, experimental validation, and in-field tests. In particular, the design and development of a fruit meter is presented. In the bioengineering field, three different projects are reported, detailing the architectures implemented and the validation tests conducted. Two prototype realizations of a system for measuring the inner temperature of electric motors in an automotive application are then discussed. Lastly, the HW/SW design of a Smart Sensor Network is analyzed: the network features on-board data management and processing, integration into an IoT toolchain, Wireless Sensor Network developments, and an AI framework for vibration-based structural assessment.
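
As an example of the kind of on-board elaboration such nodes perform, here is a minimal sketch (not one of the thesis designs) of reducing a raw accelerometer window to two vibration features, RMS level and dominant frequency, before anything is shipped into the IoT toolchain.

```python
# Hedged sketch: on-board vibration feature extraction for structural
# assessment -- send two features per window instead of every raw sample.
import numpy as np

def vibration_features(samples: np.ndarray, fs: float) -> dict:
    """samples: one window of accelerometer data; fs: sampling rate in Hz."""
    rms = float(np.sqrt(np.mean(samples ** 2)))
    # Remove the DC offset so the spectral peak reflects actual vibration.
    spectrum = np.abs(np.fft.rfft(samples - samples.mean()))
    freqs = np.fft.rfftfreq(samples.size, d=1.0 / fs)
    dominant = float(freqs[np.argmax(spectrum)])
    return {"rms": rms, "dominant_hz": dominant}

# Synthetic test: a 12 Hz vibration sampled at 200 Hz for 2 seconds.
fs = 200.0
t = np.arange(0, 2.0, 1.0 / fs)
window = 0.5 * np.sin(2 * np.pi * 12 * t) + 0.05 * np.random.randn(t.size)
print(vibration_features(window, fs))  # rms ~0.35, dominant_hz ~12.0
```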