894 results for software licenses
Abstract:
There is no empirical evidence whatsoever to support most of the beliefs on which software construction is based. We do not yet know the adequacy, limits, qualities, costs and risks of the technologies used to develop software. Experimentation helps to check beliefs and opinions and convert them into facts. This research is concerned with replication. Replication is a key component for gathering empirical evidence on software development that can be used in industry to build better software more efficiently. Replication has not been easy to do in software engineering (SE) because the experimental paradigm applied to software development is still immature. Nowadays, a replication is mostly run using a traditional replication package. However, traditional replication packages do not appear to have been as effective as expected for transferring information among researchers in SE experimentation. The trouble spot appears to be the replication setup, caused by version management problems with materials, instruments, documents, etc. This has proved to be an obstacle to obtaining enough detail about the experiment to reproduce it as exactly as possible. We address the problem of information exchange among experimenters by developing a schema to characterize replications. We will adapt configuration management and product line ideas to support the experimentation process. This will enable researchers to make systematic decisions based on explicit knowledge rather than assumptions about replications. This research will output a replication support web environment. This environment will not only archive but also manage experimental materials flexibly enough to allow both similar and differentiated replications, with massive experimental data storage. The platform should be accessible to several research groups working together on the same families of experiments.
Abstract:
Usability plays an important role in satisfying users' needs. There are many recommendations in the HCI literature on how to improve software usability. Our research focuses on those recommendations that affect the system architecture rather than just the interface. However, improving software usability in aspects that affect the architecture increases the analyst's workload and the complexity of development. This paper proposes a solution based on model-driven development. We propose representing functional usability mechanisms abstractly by means of conceptual primitives. The analyst uses these primitives to incorporate functional usability features at the early stages of the development process. Following the model-driven development paradigm, these features are then automatically transformed in the subsequent steps of development, a transformation that is hidden from the analyst.
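The following sketch only illustrates the general declare-then-generate pattern: a usability feature is stated abstractly and boilerplate is produced from it automatically. The names used (UsabilityPrimitive, generate_progress_feedback, notify_user) are invented for this example and are not the paper's actual primitives or transformations, which target later development models rather than final code.

```python
# Illustrative sketch only: a hypothetical "conceptual primitive" for a
# functional usability mechanism and a toy model-to-code transformation.
from dataclasses import dataclass

@dataclass
class UsabilityPrimitive:
    """Abstract declaration the analyst attaches to the model (hypothetical)."""
    mechanism: str          # e.g. "progress feedback", "undo", "cancel"
    target_operation: str   # system operation the mechanism applies to

def generate_progress_feedback(p: UsabilityPrimitive) -> str:
    """Toy transformation: emit skeleton code for a progress-feedback wrapper."""
    return (
        f"def {p.target_operation}_with_feedback(*args, **kwargs):\n"
        f"    notify_user('Starting {p.target_operation}...')\n"   # notify_user is assumed
        f"    result = {p.target_operation}(*args, **kwargs)\n"
        f"    notify_user('{p.target_operation} finished')\n"
        f"    return result\n"
    )

if __name__ == "__main__":
    primitive = UsabilityPrimitive("progress feedback", "import_data")
    print(generate_progress_feedback(primitive))
```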
Abstract:
In this keynote, Prof. Juristo describes the experimental paradigm and how it could be applied to software engineering, highlighting the challenges of applying it and the achievements obtained so far.
Abstract:
This paper describes a case study in WCET analysis of an on-board spacecraft software system. The attitude control system of UPMSat-2, an experimental micro-satellite scheduled to be launched in 2013, is used for an experiment on analysing the worst-case execution time of code automatically generated from a Simulink model. In order to properly test the code, a hardware-in-the-loop configuration with a simulation model of the spacecraft environment has been used as a test bench. The code has been analysed with RapiTime, with some modifications to the original instrumentation routines in order to take into account the particularities of the test configuration. Results from the experiment are described and discussed in the paper.
Abstract:
The ASSERT project defined new software engineering methods and tools for the development of critical embedded real-time systems in the space domain. The ASSERT model-driven engineering process was one of the achievements of the project and is based on the concept of property-preserving model transformations. The key element of this process is that non-functional properties of the software system must be preserved during model transformations. Property preservation is carried out through model transformations compliant with the Ravenscar Profile and provides a formal basis for the process. In this way, the so-called Ravenscar Computational Model is central to the whole ASSERT process. This paper describes the work done in the HWSWCO study, whose main objective has been to address the integration of the Hardware/Software co-design phase into the ASSERT process. In order to do that, non-functional properties of the software system must also be preserved during hardware synthesis. Keywords: Ada 2005, Ravenscar Profile, Hardware/Software co-design, real-time systems, high-integrity systems, ORK
Abstract:
The analysis of the interdependence between time series has become an important field of research, mainly as a result of advances in the characterization of dynamical systems from the signals they produce and the introduction of concepts such as Generalized Synchronization (GS) and Phase Synchronization (PS). The growing number of approaches for assessing the so-called functional connectivity (FC) and effective connectivity (EC) (Friston 1994) between two (or among many) neural networks, along with their mathematical complexity, makes it desirable to gather them into a unified toolbox, thereby allowing neuroscientists, neurophysiologists and researchers from related fields to easily access and make use of them.
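As a concrete illustration of one measure such a toolbox would typically offer, the sketch below computes the phase-locking value (PLV), a standard phase-synchronization index, from two signals via the Hilbert transform. It is a generic textbook implementation, not code taken from the toolbox described in the abstract.

```python
# Minimal sketch of one phase-synchronization measure: the phase-locking value.
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x: np.ndarray, y: np.ndarray) -> float:
    """PLV between two 1-D signals: 1 = perfect phase locking, 0 = none."""
    phase_x = np.angle(hilbert(x))          # instantaneous phase of x
    phase_y = np.angle(hilbert(y))          # instantaneous phase of y
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))

if __name__ == "__main__":
    t = np.linspace(0, 10, 2000)
    a = np.sin(2 * np.pi * 1.0 * t)
    b = np.sin(2 * np.pi * 1.0 * t + 0.5) + 0.1 * np.random.randn(t.size)
    print(f"PLV = {phase_locking_value(a, b):.3f}")  # close to 1 for phase-locked signals
```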
Abstract:
This article presents software that determines the statistical behavior of qualitative survey data previously transformed into quantitative data using a Likert scale. The main aim is to offer users a useful tool for obtaining statistical characteristics and forecasts of financial risks in a fast and simple way. Additionally, this paper presents the definition of operational risk. The article also explains different techniques for conducting surveys with a Likert scale (Avila, 2008) to capture expert opinion by transforming qualitative data into quantitative data. It is easy to interpret a single expert's opinion about a risk, but when users have many surveys and matrices it becomes difficult to obtain results, because the common data must be compared. Moreover, a representative statistical value must be extracted from the common data to obtain the weight of each risk. Finally, the article describes the development of the Qualitative Operational Risk Software (QORS), which has been designed to determine the root of risks in organizations and their operational value at risk (OpVaR) (Jorion, 2008; Chernobai et al., 2008) when the input data comes from expert opinion and the associated matrices.
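A minimal sketch of the underlying idea, mapping Likert answers to numeric scores and extracting a representative statistic from many expert opinions, is shown below. The five-point scale, the mean weight and the 95% quantile are assumptions made for illustration; they are not the formulas implemented in QORS or the cited OpVaR definitions.

```python
# Illustrative sketch only: Likert answers -> numeric scores -> simple summary.
import numpy as np

# Hypothetical 5-point Likert scale for the severity of a risk
LIKERT = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def to_scores(answers: list[str]) -> np.ndarray:
    """Transform qualitative answers into quantitative scores."""
    return np.array([LIKERT[a] for a in answers], dtype=float)

def risk_summary(answers: list[str], q: float = 0.95) -> dict:
    """Aggregate expert opinions: mean weight and a quantile-based figure (assumed, not OpVaR)."""
    scores = to_scores(answers)
    return {"mean": scores.mean(), f"q{int(q * 100)}": np.quantile(scores, q)}

if __name__ == "__main__":
    survey = ["medium", "high", "high", "very high", "low", "high"]
    print(risk_summary(survey))
```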
Abstract:
There are a number of research and development activities exploring Time and Space Partitioning (TSP) to implement safe and secure flight software. This approach allows different real-time applications with different levels of criticality to be executed on the same computer board. In order to do that, flight applications must be isolated from each other in the temporal and spatial domains. This paper presents the first results of a partitioning platform based on the Open Ravenscar Kernel (ORK+) and the XtratuM hypervisor. ORK+ is a small, reliable real-time kernel supporting the Ada Ravenscar Computational Model, which is central to the ASSERT development process. XtratuM supports multiple virtual machines, i.e. partitions, on a single computer and is being used in the Integrated Modular Avionics for Space study. ORK+ executes in an XtratuM partition, enabling Ada applications to share the computer board with other applications.
Abstract:
This paper proposes a highly automated mechanism for easily building an undo facility into a new or existing system. Our proposal is based on the observation that, for a large set of operators, it is not necessary to store in-memory object states or executed system commands to undo an action; storing the input data is enough. This strategy greatly simplifies the design of the undo process and encapsulates most of the required functionality in a framework structure similar to many object-oriented programming frameworks.
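The sketch below illustrates the core observation for a simple text-insertion operator: the inputs recorded when the operation runs are enough to compute its inverse, so no object-state snapshot or command replay is needed. The class and framework names are invented for this example and do not reproduce the paper's design.

```python
# Minimal sketch: undo derived from an operation's *inputs*, not saved state.
class UndoableInsert:
    """Text insertion whose undo is computed purely from its input data."""
    def __init__(self, position: int, text: str):
        self.position = position   # input data recorded at execution time
        self.text = text

    def do(self, document: list[str]) -> None:
        document[self.position:self.position] = list(self.text)

    def undo(self, document: list[str]) -> None:
        # The inputs (position and length of text) fully determine the inverse.
        del document[self.position:self.position + len(self.text)]

class UndoStack:
    """Tiny framework piece: records executed operations and undoes the last one."""
    def __init__(self):
        self._history = []

    def execute(self, op, document) -> None:
        op.do(document)
        self._history.append(op)

    def undo_last(self, document) -> None:
        if self._history:
            self._history.pop().undo(document)

if __name__ == "__main__":
    doc = list("hello world")
    stack = UndoStack()
    stack.execute(UndoableInsert(5, ", dear"), doc)
    print("".join(doc))   # hello, dear world
    stack.undo_last(doc)
    print("".join(doc))   # hello world
```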
Abstract:
Background: There is no globally accepted open source software development process to define how open source software is developed in practice. A process description is important for coordinating all the software development activities involving both people and technology. Aim: The research question that this study sets out to answer is: what activities do open source software process models contain? The activity groups on which it focuses are Concept Exploration, Software Requirements, Design, Maintenance and Evaluation. Method: We conduct a systematic mapping study (SMS). An SMS is a form of systematic literature review that aims to identify and classify the available research papers concerning a particular issue. Results: We located a total of 29 primary studies, which we categorized by the open source software project that they examine and by activity type (Concept Exploration, Software Requirements, Design, Maintenance and Evaluation). The activities present in most of the open source software development processes were Execute Tests and Conduct Reviews, which belong to the Evaluation activity group. Maintenance is the only group with primary studies addressing all of the activities that it contains. Conclusions: The primary studies located by the SMS are the starting point for analyzing the open source software development process and proposing a process model for this community. The papers in our paper pool that describe a specific open source software project say more about our research question than the papers that discuss open source software development without referring to a specific project.
Abstract:
This method uses the results provided by the IAHRIS v2.2 software application (Índices de Alteración Hidrológica en RÍoS, Indices of Hydrological Alteration in Rivers) to: establish a set of environmental flow regime scenarios, defined on a hydrological basis and taking the natural regime as the reference; assess, by means of the Indices of Hydrological Alteration, the environmental "distance" of each scenario from the reference; quantify, through a multi-year simulation, the water demand that each scenario entails; and plot, for all the scenarios studied, their environmental assessment together with their demand. This representation is a tool that makes it easy to interpret two of the main aspects involved in decision making when selecting an environmental flow regime: the improvement over the current situation in terms of regime alteration, and the amount of water required. As an application example, the case of the Júcar River at the Alarcón reservoir is presented.
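Purely as an illustration of scoring a scenario's "distance" from the natural regime and pairing it with its water demand, the sketch below uses a toy index (the mean of monthly min/max flow ratios, always between 0 and 1). This index and all the figures are assumptions for the example; they are not the IAHRIS indices or data from the Júcar case.

```python
# Illustrative sketch only: comparing flow-regime scenarios against the natural regime.
import numpy as np

def alteration_score(natural: np.ndarray, altered: np.ndarray) -> float:
    """1.0 = identical to the natural monthly regime; values near 0 = heavily altered."""
    ratios = np.minimum(altered, natural) / np.maximum(altered, natural)
    return float(ratios.mean())

if __name__ == "__main__":
    # Hypothetical mean monthly flows (m3/s) and annual demands (hm3/yr)
    natural = np.array([30, 40, 55, 60, 50, 35, 20, 12, 10, 15, 22, 28], float)
    scenarios = {
        "current operation": (np.array([18, 22, 30, 32, 28, 20, 14, 10, 9, 11, 14, 17], float), 320.0),
        "environmental A":   (np.array([25, 32, 45, 50, 42, 28, 17, 11, 9, 13, 18, 23], float), 210.0),
    }
    for name, (flows, demand) in scenarios.items():
        print(f"{name}: score={alteration_score(natural, flows):.2f}, demand={demand} hm3/yr")
```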
Abstract:
In this work, a complete hardware-software support platform for a WSN testbed focused on developing wireless sensor applications in a simple and intuitive way is presented, as an alternative to the commercial-mote-based testbeds found in the state of the art. The main goal of this hardware-software platform is to provide the highest abstraction level for the management of WSNs in the simplest possible way, in order to achieve a fast profiling mechanism for reliable prototyping based on the Cookies platform, as well as to help users develop, test and validate Cookie-based WSN applications.
Abstract:
In this work, a complete set of libraries for developing wireless sensor applications in a simple and intuitive way is presented, in contrast to the most widespread application abstraction mechanisms, which are based on operating systems. The main goal of this software platform, named CookieLibs, is to provide the highest abstraction level for the management of WSNs in the simplest way for users who are not familiar with software design, in order to achieve a fast profiling mechanism for reliable prototyping based on the Cookies platform.
Abstract:
This dissertation discusses how different practitioners define project success and success factors for software projects and products. The motivation for this work is to identify how software practitioners value and define project success, which can have implications for both practitioner motivation and software development productivity. Accordingly, in this work we are interested in the various perceptions of the term "success" held by different software practitioners and researchers. To obtain this information, we performed a systematic mapping of recent years' software development literature, trying to identify stakeholders' perceptions of project success as well as possible differences among the views of a project's various stakeholders. Some common terms related to project success (success project; software project success factors) were considered when formulating the search strings. The results were limited to twenty-two selected peer-reviewed conference papers and journal articles published between 2003 and 2012.
Abstract:
Some free, open-source software projects have been around for quite a long time, the longest-living ones dating from the early 1980s. For some of them, detailed information about their evolution is available in source code management systems that track all their code changes for periods of more than 15 years. This paper examines in detail the evolution of one such project, glibc, with the main aim of understanding how it evolved and how it matched Lehman's laws of software evolution. As a result, we have developed a methodology for studying the evolution of such long-lived projects based on the information in their source code management repository, described in detail several aspects of the history of glibc, including some activity and size metrics, and found that some of the laws of software evolution may not hold in this case.
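One of the activity metrics such a methodology relies on can be computed directly from the repository history. The sketch below counts commits per year using plain git log, assuming a local clone whose path is given on the command line; it is a generic illustration, not the scripts used in the glibc study.

```python
# Minimal sketch of a repository activity metric: commits per calendar year.
import subprocess
import sys
from collections import Counter

def commits_per_year(repo_path: str) -> Counter:
    """Count commits per year using author dates from `git log`."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%ad", "--date=format:%Y"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(out.splitlines())

if __name__ == "__main__":
    repo = sys.argv[1] if len(sys.argv) > 1 else "."   # path to a local clone
    for year, count in sorted(commits_per_year(repo).items()):
        print(f"{year}: {count} commits")
```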