947 results for Factory of software
Abstract:
This paper presents and validates a methodology for integrating reusable software components in diverse game engines. While conforming to the RAGE component-based architecture described elsewhere, the paper explains how the interactions and data exchange processes between a reusable software component and a game engine should be implemented to achieve seamless integration. To this end, a RAGE-compliant C# software component providing a difficulty adaptation routine was integrated with an exemplary strategic tile-based game, "TileZero". Implementations in MonoGame, Unity and Xamarin, respectively, have demonstrated successful portability of the adaptation component. Portability across various delivery platforms (Windows desktop, iOS, Android, Windows Phone) was also established. This study has thereby established the validity of the RAGE architecture and its underlying interaction processes for the cross-platform and cross-game engine reuse of software components. The RAGE architecture thus accommodates the large-scale development and application of reusable software components for serious gaming.
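The portability pattern this abstract describes, a reusable component that talks to each engine only through a thin per-engine bridge, can be sketched language-neutrally (Python here for brevity; the class and method names are illustrative placeholders, not the actual RAGE interfaces, and the adaptation rule is a deliberately simple stand-in):

```python
from abc import ABC, abstractmethod

class EngineBridge(ABC):
    """Thin per-engine adapter: the only code that must be rewritten
    when the component moves to a new engine or platform."""
    @abstractmethod
    def log(self, msg: str) -> None: ...

class DifficultyAdapter:
    """Reusable component: depends only on the bridge interface,
    never on a concrete engine API."""
    def __init__(self, bridge: EngineBridge):
        self.bridge = bridge
        self.difficulty = 0.5

    def update(self, player_won: bool) -> float:
        # Illustrative rule: nudge difficulty toward the player's
        # demonstrated skill, clamped to [0, 1].
        self.difficulty += 0.1 if player_won else -0.1
        self.difficulty = min(1.0, max(0.0, self.difficulty))
        self.bridge.log(f"difficulty -> {self.difficulty:.1f}")
        return self.difficulty

class ConsoleBridge(EngineBridge):
    """Stand-in for a MonoGame-, Unity- or Xamarin-specific bridge."""
    def log(self, msg: str) -> None:
        print(msg)
```

Porting the component to another engine then means writing one new `EngineBridge` subclass; `DifficultyAdapter` itself is untouched.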
Abstract:
The objective of D6.1 is to make the Ecosystem software platform, with its underlying Software Repository, Digital Library and Media Archive, available to the degree that the RAGE project can start collecting content in the form of software assets and documents of various media types. This paper describes the current state of the Ecosystem as of month 12 of the project, and documents the structure of the Ecosystem, its individual components, integration strategies, and overall approach. The deliverable itself is the deployment of the described components, which is now available to collect and curate content. Whilst this version is not yet feature complete, full realization is expected within the next few months. Following this development, WP6 will continue to add features driven by the business models to be defined by WP7 later in the project.
Abstract:
The large upfront investments required for game development pose a severe barrier to the wider uptake of serious games in education and training. There is also a lack of well-established methods and tools that support game developers in preserving and enhancing the games' pedagogical effectiveness. The RAGE project, a Horizon 2020 funded research project on serious games, addresses these issues by making available reusable software components that aim to support the pedagogical qualities of serious games. In order to easily deploy and integrate these game components in a multitude of game engines, platforms and programming languages, RAGE has developed and validated a hybrid component-based software architecture that preserves component portability and interoperability. While a first set of software components is being developed, this paper presents selected examples to explain the overall system's concept and its practical benefits. First, the Emotion Detection component uses the learners' webcams to capture their emotional states from facial expressions. Second, the Performance Statistics component is an add-on for learning analytics data processing, which allows instructors to track and inspect learners' progress without having to deal with the required statistics computations. Third, a set of language processing components accommodates the analysis of learners' textual inputs, facilitating comprehension assessment and prediction. Fourth, the Shared Data Storage component provides a technical solution for data storage - e.g. for player data or game world data - across multiple software components. The presented components are exemplary of the anticipated RAGE library, which will include up to forty reusable software components for serious gaming, addressing diverse pedagogical dimensions.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Developing a theoretical framework for pervasive information environments is an enormous goal. This paper aims to provide a small step towards it. The following pages report on our initial investigations to devise a framework that will continue to support locative, experiential and evaluative data from 'user feedback' in an increasingly pervasive information environment. We loosely outline this framework by developing a methodology capable of moving from rapid deployment of software and hardware technologies towards the goal of a realistic immersive experience of pervasive information. We propose various technical solutions and address a range of problems, such as information capture through a novel model of sensing, processing, visualization and cognition.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Performance-based design (PBD), in a deterministic approach, characterizes performance objectives relative to desired performance levels. Performance objectives are then associated with the damage state and the established level of seismic risk. Despite this rational approach, its application remains difficult. Reliable tools are therefore needed to capture the evolution, distribution and quantification of damage. In addition, all phenomena related to nonlinearity (materials and deformations) must also be taken into account. This research thus shows how damage mechanics can contribute to solving this problem through an adaptation of the Modified Compression Field Theory and other complementary theories. The proposed formulation, adapted for monotonic, cyclic and pushover loading, accounts for nonlinear shear effects coupled with flexural and axial load mechanisms. It is applied in particular to the nonlinear analysis of concrete structural elements subjected to non-negligible shear effects. This new approach, implemented in EFiCoS (a finite element program based on damage mechanics), including its modeling criteria, is also presented here. The new approach was calibrated against experimental data for reinforced concrete shear walls, as well as for bridge beams and piers where shear effects must be taken into account.
This new, improved version of the EFiCoS software proved capable of accurately evaluating parameters associated with global performance, such as displacements, system strength, effects related to the cyclic response, and the quantification, evolution and distribution of damage. Remarkable results were also obtained in the detection of engineering limit states such as cracking, unit strains, cover spalling, core crushing, local yielding of reinforcing bars and system degradation, among others. As a practical tool for applying PBD, relationships between predicted damage indices and performance levels were obtained and expressed as charts and tables. These charts were developed as a function of relative displacement and displacement ductility. A dedicated table was developed to relate engineering limit states, damage, relative displacement and the traditional performance levels. The results showed excellent agreement with the experimental data, making the proposed formulation and the new version of EFiCoS powerful tools for applying the PBD methodology in a deterministic approach.
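The kind of table described above, relating a predicted damage index to traditional performance levels, amounts to a threshold lookup. A minimal sketch follows; the level names follow common performance-based design usage, but the threshold values are invented placeholders, not those calibrated with EFiCoS:

```python
# Hypothetical mapping from a predicted damage index in [0, 1] to a
# traditional performance level. Thresholds are illustrative only.
PERFORMANCE_LEVELS = [
    (0.10, "Operational"),
    (0.30, "Immediate Occupancy"),
    (0.60, "Life Safety"),
    (0.80, "Collapse Prevention"),
]

def performance_level(damage_index: float) -> str:
    """Return the first performance level whose damage threshold is
    not exceeded; beyond the last threshold the element is treated
    as collapsed."""
    for threshold, level in PERFORMANCE_LEVELS:
        if damage_index <= threshold:
            return level
    return "Collapse"
```

In practice each row of such a table would also carry the associated engineering limit state (cracking, cover spalling, core crushing, and so on) observed at that damage level.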
Abstract:
The availability of CFD software that is easy to use and achieves high efficiency on a wide range of parallel computers is extremely limited. The investment and expertise required to parallelise a code can be enormous. In addition, the cost of supercomputers forces high utilisation to justify their purchase, requiring a wide range of software. To break this impasse, tools are urgently needed to assist in the parallelisation process, tools that dramatically reduce the parallelisation time but do not degrade the performance of the resulting parallel software. In this paper we discuss enhancements to the Computer Aided Parallelisation Tools (CAPTools) to assist in the parallelisation of complex unstructured mesh-based computational mechanics codes.
Abstract:
This work advances a research agenda which has as its main aim the application of Abstract Algebraic Logic (AAL) methods and tools to the specification and verification of software systems. It uses a generalization of the notion of an abstract deductive system to handle multi-sorted deductive systems which differentiate visible and hidden sorts. Two main results of the paper are obtained by generalizing properties of the Leibniz congruence — the central notion in AAL. In this paper we discuss a question we posed in [1] about the relationship between the behavioral equivalences of equivalent hidden logics. We also present a necessary and sufficient intrinsic condition for two hidden logics to be equivalent.
Abstract:
The fishing sector has suffered a severe setback, with a reduction in fishing stocks and, more recently, in the fishing fleet. One of the most important factors in this decline is the persistent difficulty of finding fish in sufficient quality and quantity to allow the sector to work steadily all year long. Other factors also affect the fishing sector negatively, in particular the huge maintenance costs of the ships and the high daily costs of operating each vessel. One of the main daily operating costs is fuel consumption. As an example, a 30-meter boat working around 17 hours a day consumes 2500 liters of fuel per day. This value is very high given the productivity of the sector. Based on this premise, a project was developed with the aim of reducing fuel consumption in fishing vessels. The project, called "ShipTrack", aims to use forecasts of ocean currents in the routing of ships. The objective is to sail with favorable ocean currents and avoid adverse ones, taking into account the course of the ship, in order to reduce fuel consumption and increase ship speed. The methodology involved the creation of dedicated software to optimize routes based on forecasts of the ocean currents. These forecasts are produced using numerical modelling, a methodology that has become increasingly important in many communities, because through modelling important phenomena of the terrestrial ecosystem can be analyzed, verified and predicted. The objective was the creation of this software; however, its development was not completed, so a new approach was needed to verify the influence of the ocean currents on the navigation of the fishing ship "Cruz de Malta".
In this new approach, information was continuously gathered during the various ship routes on instantaneous speed, instantaneous fuel consumption, and the state of the ocean currents along the course of the ship, among other factors. After 4 sea voyages and many analyzed routes, it was possible to verify the influence of the ocean currents on ship speed and fuel consumption. For example, in many stages of the voyages an increase in speed was observed in zones where the ocean currents favored the ship's movement. This incorporation of new data into the fishing industry was viewed positively by its players, which encourages new developments in this industry.
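The effect observed above, higher speed over ground where currents run with the vessel, follows from simple vector addition of the vessel's through-water velocity and the current. A minimal sketch (the function name and the figures in the examples are illustrative assumptions, not ShipTrack data):

```python
import math

def speed_over_ground(boat_speed_kn: float, heading_deg: float,
                      current_speed_kn: float, current_dir_deg: float) -> float:
    """Speed made good when the vessel's through-water velocity is
    combined with an ocean current (simple vector addition; angles
    measured clockwise from north)."""
    bx = boat_speed_kn * math.sin(math.radians(heading_deg))
    by = boat_speed_kn * math.cos(math.radians(heading_deg))
    cx = current_speed_kn * math.sin(math.radians(current_dir_deg))
    cy = current_speed_kn * math.cos(math.radians(current_dir_deg))
    return math.hypot(bx + cx, by + cy)

# A 2 kn current directly astern adds its full speed...
assert abs(speed_over_ground(10, 0, 2, 0) - 12) < 1e-9
# ...while the same current on the nose subtracts it.
assert abs(speed_over_ground(10, 0, 2, 180) - 8) < 1e-9
```

A route optimizer of the kind the project envisaged would evaluate this quantity along candidate tracks, using forecast current fields, and prefer legs where the speed made good per liter of fuel is highest.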
Abstract:
We present a review of the historical evolution of software engineering, intertwining it with the history of knowledge engineering, because "those who cannot remember the past are condemned to repeat it." This retrospective represents a further step towards understanding the current state of both disciplines; history also has positive experiences, some of which we would like to remember and repeat. The two disciplines had parallel and divergent evolutions, yet followed a similar pattern. We also define a set of milestones that represent a convergence or divergence of software development methodologies. These milestones do not appear at the same time in software engineering and knowledge engineering, so lessons learned in one discipline can help in the evolution of the other.
Abstract:
With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, the same is not true of software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging that leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively, the sequence covers of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared between a number of test cases failing for the same reason resemble the faulty execution path, and hence, the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers.
Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with a high likelihood of containing the root cause from the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace the common subsequences back from the end to the root cause. A debugging tool is created to enable developers to use the approach, and is integrated with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and the state-of-the-art techniques shows that developers need only inspect a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
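The core idea, narrowing the faulty path by intersecting the code sequences covered by failing tests, can be sketched with a pairwise longest-common-subsequence fold. This is a simplified illustration, not the dissertation's optimized algorithm, and the traces are invented:

```python
from functools import reduce

def lcs(a, b):
    """Longest common subsequence of two traces (standard DP table
    plus a backward walk to reconstruct one witness)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

def candidate_faulty_path(failing_traces):
    """Fold LCS over all failing traces: statements shared by every
    failing execution are candidates for the faulty path."""
    return reduce(lcs, failing_traces)

# Three failing executions, as statement-id sequences.
traces = [["a", "b", "c", "d", "e"],
          ["a", "c", "x", "d", "e"],
          ["a", "z", "c", "d"]]
# Statements shared by all failing runs: the narrowed search space.
assert candidate_faulty_path(traces) == ["a", "c", "d"]
```

In this toy example, three five-statement traces are reduced to a three-statement candidate path for the developer to inspect, which is exactly the search-space narrowing the hypothesis predicts.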
Abstract:
Software engineering best practices can significantly improve software development. However, implementing best practices requires skilled professionals, financial investment and technical support to facilitate implementation and achieve the corresponding improvement. In this paper we propose a protocol for designing techniques to implement software engineering best practices. The protocol includes the identification and selection of the process to improve, the study of standards and models, and the identification of the best practices associated with the process and their possible implementation techniques. In addition, technique design activities are defined in order to create or adapt techniques for implementing best practices in software development.
Abstract:
This article is a result of the research project "Diseño de un modelo para mejorar los procesos de estimación de costos para las empresas desarrolladoras de software" (design of a model to improve cost estimation processes for software development companies). It presents an international literature review aimed at identifying trends and methods for producing more accurate software cost estimates. Using the Delphi predictive method, a panel of experts from the Barranquilla software sector classified and rated five realistic estimation scenarios by probability of occurrence. A completely randomized experiment was designed, whose results pointed to two qualitatively similar scenarios; from these, an analysis model was built around three agents: methodology, team capability, and technological products, each with three compliance categories for achieving more precise estimates.
Abstract:
Sustainability in software systems is still a new practice that most software developers and companies are trying to incorporate into their software development lifecycle, and it has been discussed at length in academia. Sustainability is a complex concept viewed from economic, environmental and social dimensions, and the several definitions that have been proposed sometimes make the concept very fuzzy and difficult to apply and assess in software systems. This has hindered the adoption of sustainability in the software industry. Little research explores sustainability as a quality property of software products and services, answering questions such as: How can sustainability be quantified as a quality construct in the same way as other quality attributes such as security, usability and reliability? How can it be applied to software systems? What are the measures and measurement scales of sustainability? The goal of this research is to investigate the definitions, perceptions and measurement of sustainability from the quality perspective. Grounded in the general theory of software measurement, the aim is to develop a method that decomposes sustainability into factors, criteria and metrics. The result is a method to quantify and assess the sustainability of software systems while incorporating management and user concerns. Conclusion: the method will enable companies to adopt sustainability easily while facilitating its integration into the software development process and tools. It will also help companies measure the sustainability of their software products along the economic, environmental, social, individual and technological dimensions.
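A factor/criteria/metric decomposition of the kind described above can be sketched as a weighted aggregation over normalized metrics. The dimensions come from the abstract, but every metric name, weight and score below is an invented placeholder, not part of the proposed method:

```python
# Decompose sustainability into factors -> normalized metrics (0..1),
# then aggregate with per-factor weights. All names, weights and
# scores are hypothetical placeholders.
FACTORS = {
    "economic":      {"maintenance_cost_ratio": 0.6, "reuse_rate": 0.8},
    "environmental": {"energy_per_request": 0.5, "idle_power": 0.7},
    "social":        {"accessibility_score": 0.9},
    "technical":     {"test_coverage": 0.85, "dependency_freshness": 0.4},
}
WEIGHTS = {"economic": 0.3, "environmental": 0.3,
           "social": 0.2, "technical": 0.2}

def sustainability_index(factors, weights):
    """Weighted mean of per-factor metric means: a single number in
    [0, 1] that can be tracked like any other quality attribute."""
    total = 0.0
    for name, metrics in factors.items():
        factor_score = sum(metrics.values()) / len(metrics)
        total += weights[name] * factor_score
    return round(total, 3)
```

Quantifying sustainability this way puts it on the same footing as security or reliability scores: it becomes a measurable attribute that can be reported per release and compared across products.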