978 results for Software Development – metrics
Abstract:
Background – Software effort estimation research aims to improve the accuracy of effort estimates for software projects and activities. Aims – This study describes the development and use of a web application to collect data generated by the Planning Poker estimation process, and the analysis of the collected data to investigate the impact of revising previous estimates when making similar estimates in a Planning Poker context. Method – Software activities were estimated by Universidade Tecnológica Federal do Paraná (UTFPR) computing students using Planning Poker, with and without revising previously executed similar activities, and data about the decision-making process was stored. The collected data was then used to investigate the impact that revising similar executed activities has on the accuracy of software effort estimates. Results – The UTFPR computing students were divided into 14 groups. Eight of them showed an accuracy increase in more than half of their estimates, three had roughly the same accuracy in more than half of their estimates, and only three lost accuracy in more than half of their estimates. Conclusion – Reviewing similar executed software activities when using Planning Poker led to more accurate software estimates in most cases and can therefore improve the software development process.
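A common way to quantify the accuracy change described above is the magnitude of relative error (MRE) of each estimate. The sketch below is a hypothetical Python illustration, not the study's actual tooling; the function names and the sample numbers are assumptions made for the example.

    # Hypothetical sketch: compare Planning Poker estimation accuracy
    # with and without revising similar, already executed activities.
    # Accuracy is measured by the magnitude of relative error (MRE):
    # MRE = |actual - estimated| / actual  (lower is better).

    def mre(estimated_hours, actual_hours):
        """Magnitude of relative error for a single estimate."""
        return abs(actual_hours - estimated_hours) / actual_hours

    def group_improved(estimates):
        """estimates: list of (estimate_without_revision, estimate_with_revision, actual).
        Returns True if revision improved accuracy in more than half of the estimates."""
        improved = sum(
            1 for without, with_rev, actual in estimates
            if mre(with_rev, actual) < mre(without, actual)
        )
        return improved > len(estimates) / 2

    # Example with made-up numbers: two of the three estimates improved after revision.
    sample_group = [(10, 8, 8), (5, 6, 7), (12, 12, 9)]
    print(group_improved(sample_group))  # True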
Abstract:
This article is a result of the research project "Diseño de un modelo para mejorar los procesos de estimación de costos para las empresas desarrolladoras de software" (design of a model to improve cost estimation processes for software development companies). A review of the international literature is presented in order to identify trends and methods for making more accurate software cost estimates. Using the Delphi predictive method, a group of experts from the Barranquilla software sector classified and rated five realistic estimation scenarios according to their probability of occurrence. A completely randomized experiment was designed; its results pointed to two qualitatively and statistically similar scenarios, from which an analysis model was built on three agents: methodology, capability of the work team, and technological products, each with three compliance categories for achieving more precise estimates.
Abstract:
The knowledge-intensive nature of software production and its rising demand suggest the need to establish mechanisms to properly manage the knowledge involved, in order to meet requirements of schedule, cost, and quality. Knowledge capitalization is a process that ranges from the identification to the evaluation of the knowledge produced and used. In software development specifically, capitalization enables easier access to knowledge, minimizes its loss, shortens the learning curve, and avoids repeated errors and rework. This thesis presents Know-Cap, a method developed to organize and guide the capitalization of knowledge in software development. Know-Cap facilitates the location, preservation, value addition, and updating of knowledge so that it can be used in the execution of new tasks. The method was proposed on the basis of a set of methodological procedures: a literature review, a systematic review, and an analysis of related work. The feasibility and appropriateness of Know-Cap were analyzed through an application study conducted in a real case and an analytical study of software development companies. The results obtained indicate that Know-Cap supports the capitalization of knowledge in software development.
Abstract:
Some authors have pointed out the need to understand the technological structuring process in contemporary firms. From this perspective, the software industry is a very important element because it provides products and services directly to organizations in many fields. The Brazilian software industry has peculiarities that distinguish it from the industries of developed countries, which makes understanding it even more relevant. There is evidence that local firms adopt different strategies and structural configurations to enter a market naturally dominated by large multinational firms. Therefore, this study aims to understand not only the structural configurations assumed by domestic firms but also the dynamics and the process that lead to these different configurations. To do so, this PhD dissertation investigates the institutional environment, its entities, and the isomorphic movements through an exploratory, descriptive, and explanatory multiple-case study. Eight software development companies from the Recife information technology cluster were visited; a form was applied, and an interview with one of each firm's main professionals was conducted. Although the study is predominantly qualitative, part of the data was analyzed through charts and graphs, providing an overview of the companies and their environment that proved very useful for the analysis based on the interpretation of the interviews. As a result, it was found that companies are structured around hybrid business models drawn from two ideal types of software development company: the software factory and the technology-based company. Regarding the development process, there is a balanced distribution between the traditional and agile development paradigms. Among the traditional methodologies, the Rational Unified Process (RUP) is predominant; Scrum is the most used methodology among the organizations based on the Agile Manifesto's principles. Regarding the structuring process, each institutional entity acts in a way that generates a different isomorphic pressure. Emphasis was given to entities such as customers, research agencies, clusters, market-leading businesses, public universities, incubators, software industry organizations, technology vendors, development tool suppliers, and the managers' schooling and background, because they relate closely to the software firms. In this relationship, a dual and bilateral influence was found. Finally, the structuring level of the organizational field was identified as low, which gives organizational actors the chance to act independently.
Abstract:
Requirements specification has long been recognized as a critical activity in software development processes because of its impact on project risks when poorly performed. A large body of studies addresses theoretical aspects, proposed techniques, and recommended practices for Requirements Engineering (RE). To be successful, RE has to ensure that the specified requirements are complete and correct, meaning that all intents of the stakeholders in a given business context are covered by the requirements and that no unnecessary requirement has been introduced. However, accurately capturing the business intents of the stakeholders remains a challenge and is a major factor in software project failures. This master's dissertation presents a novel method, referred to as Problem-Based SRS, aimed at improving the quality of the Software Requirements Specification (SRS) in the sense that the stated requirements provide suitable answers to the customer's real business issues. In this approach, the knowledge about the software requirements is constructed from the knowledge about the customer's problems. Problem-Based SRS consists of an organization of activities and outcome objects in a process with five main steps. It supports the requirements engineering team in systematically analyzing the business context and specifying the software requirements, while also taking into account a first glance and vision of the software. The quality of the specifications is evaluated using traceability techniques and axiomatic design principles. The case studies conducted and presented in this document indicate that the proposed method can contribute significantly to improving software requirements specification.
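The completeness and correctness criterion described above can be illustrated with a simple traceability check. The sketch below is a hypothetical Python example, not the dissertation's actual method or tooling; the identifiers PROB-* and REQ-* are invented for illustration.

    # Hypothetical sketch of a problem-to-requirement traceability check,
    # illustrating the completeness/correctness criterion described above.

    # trace: maps each requirement ID to the customer problem(s) it addresses.
    trace = {
        "REQ-1": ["PROB-A"],
        "REQ-2": ["PROB-A", "PROB-B"],
        "REQ-3": [],  # traces to no problem -> possibly unnecessary
    }
    problems = {"PROB-A", "PROB-B", "PROB-C"}

    covered = {p for targets in trace.values() for p in targets}

    uncovered_problems = problems - covered                          # incompleteness: intents not addressed
    untraced_requirements = [r for r, t in trace.items() if not t]   # possibly unnecessary requirements

    print(uncovered_problems)      # {'PROB-C'}
    print(untraced_requirements)   # ['REQ-3']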
Abstract:
Regardless of the methodology adopted for software development, two kinds of activities are involved: project management or direction activities, and the technical activities inherent to developing the product itself, such as the requested requirements, analysis, design, implementation, and the tests or trials prior to its release. This work stems from the interest in designing a methodology for managing the testing and trial phase, based on the model for integrating the activities covered in the PMBOK guide, which is compatible with the management functions and technical activities of other methodologies, especially in their testing stage. Hence the importance for project managers of obtaining satisfactory results in this phase, given its direct and significant impact on meeting the estimated schedule and costs: it makes it possible to prevent or mitigate additional time or cost overruns due to rework, avoiding their transfer to the client or their absorption by the software manufacturer. Likewise, ensuring a correct execution of the testing and trial phase guarantees that the project meets quality standards, according to its measurement indicators and user satisfaction.
Abstract:
A picture tells a thousand words. We all know that. Then why are our development tools showing mainly text with so much obstinacy? Even when visualizations do make it into our tools, they typically do not make it past the periphery. Something is deeply wrong. We argue that visualizations must become pervasive in software development, and to accommodate this goal, the integrated development environments must change significantly.
Abstract:
Doctorate in Management
Abstract:
Scientific research is increasingly data-intensive, relying more and more upon advanced computational resources to answer the questions most pressing to society at large. This report presents findings from a brief descriptive survey sent to a sample of 342 leading researchers at the University of Washington (UW), Seattle, Washington, in 2010 and 2011, as the first stage of the larger National Science Foundation project "Interacting with Cyberinfrastructure in the Face of Changing Science." The survey assesses these researchers' use of advanced computational resources, data, and software in their research. We present high-level findings that describe UW researchers' demographics, interdisciplinarity, research groups, data use, software development and use, data storage and transfer activities, collaboration tools, and computing resources. These findings offer insight into the state of computational resources in use during this period, as well as a look at the data intensiveness of UW researchers.
Abstract:
Software is an important infrastructural component of scientific research practice. The work of research often requires scientists to develop, use, and share software in order to address their research questions. This report presents findings from a survey of researchers at the University of Washington in three broad areas: Oceanography, Biology, and Physics. The survey is part of the National Science Foundation funded study Scientists and their Software: A Sociotechnical Investigation of Scientific Software Development and Sharing (ACI-1302272). We asked each respondent about their research area and data use, along with their use, development, and sharing of software. Finally, we asked about the challenges researchers face with software and about their concerns regarding software's effect on study replicability. These findings are part of ongoing efforts to develop deeper characterizations of the role of software in twenty-first century scientific research.
Abstract:
Ever stricter limits on pollutant emissions, together with greater attention to fuel consumption, performance gains, and drivability, lead to the development of increasingly complicated engine control algorithms. At the same time, the propulsion unit is becoming an increasingly varied collection of subsystems that must work in unison. The calibration engineer faces a multitude of variables and algorithms that must be calibrated and tested, and needs tools that help analyze engine behavior by providing concise, easily accessible results. This work reports the development of a combustion analysis system: the goal was to develop software that provides the best solutions for the analysis of an internal combustion engine in terms of accuracy of results, variety of available calculations, ease of use, and integration with other systems through the real-time sharing of the computed results.
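As a hypothetical illustration of the kind of calculation such a combustion analysis tool typically offers (the abstract does not list the software's specific computations), the sketch below estimates the indicated mean effective pressure (IMEP) from a sampled in-cylinder pressure trace; the sample values are invented for the example.

    # Hypothetical sketch: indicated mean effective pressure (IMEP) from a
    # sampled in-cylinder pressure trace, a typical combustion-analysis metric.
    # IMEP = (cyclic integral of p dV) / displaced volume.

    def imep(pressure, volume):
        """pressure [Pa] and volume [m^3] sampled at the same crank-angle
        points over one complete engine cycle."""
        work = 0.0
        for i in range(1, len(volume)):
            dv = volume[i] - volume[i - 1]
            work += 0.5 * (pressure[i] + pressure[i - 1]) * dv  # trapezoidal p*dV
        displaced = max(volume) - min(volume)
        return work / displaced  # [Pa]

    # Toy trace (a real trace has hundreds of points per cycle).
    p = [1.0e5, 5.0e5, 2.0e6, 8.0e5, 1.2e5]
    v = [5.0e-4, 1.0e-4, 5.0e-5, 3.0e-4, 5.0e-4]
    print(imep(p, v) / 1.0e5, "bar")  # roughly 5.8 bar for this toy trace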
Abstract:
Background: High-density tiling arrays and new sequencing technologies are generating rapidly increasing volumes of transcriptome and protein-DNA interaction data. Visualization and exploration of this data is critical to understanding the regulatory logic encoded in the genome by which the cell dynamically affects its physiology and interacts with its environment. Results: The Gaggle Genome Browser is a cross-platform desktop program for interactively visualizing high-throughput data in the context of the genome. Important features include dynamic panning and zooming, keyword search and open interoperability through the Gaggle framework. Users may bookmark locations on the genome with descriptive annotations and share these bookmarks with other users. The program handles large sets of user-generated data using an in-process database and leverages the facilities of SQL and the R environment for importing and manipulating data. A key aspect of the Gaggle Genome Browser is interoperability. By connecting to the Gaggle framework, the genome browser joins a suite of interconnected bioinformatics tools for analysis and visualization with connectivity to major public repositories of sequences, interactions and pathways. To this flexible environment for exploring and combining data, the Gaggle Genome Browser adds the ability to visualize diverse types of data in relation to its coordinates on the genome. Conclusions: Genomic coordinates function as a common key by which disparate biological data types can be related to one another. In the Gaggle Genome Browser, heterogeneous data are joined by their location on the genome to create information-rich visualizations yielding insight into genome organization, transcription and its regulation and, ultimately, a better understanding of the mechanisms that enable the cell to dynamically respond to its environment.
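The "common key" idea in the conclusion can be shown with a small sketch. This is a hypothetical Python example, not the Gaggle Genome Browser's actual code: two unrelated data types are joined simply because their intervals overlap at the same genomic coordinates.

    # Hypothetical sketch: joining heterogeneous data types by genomic coordinates.
    # Two records are related if they lie on the same sequence and their
    # coordinate intervals overlap.

    def overlaps(a, b):
        """a, b: (sequence, start, end) with start < end."""
        return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

    transcripts = {"txA": ("chr1", 100, 500), "txB": ("chr2", 900, 1400)}
    binding_sites = {"siteX": ("chr1", 450, 470), "siteY": ("chr2", 100, 160)}

    joined = [(t, s) for t, loc_t in transcripts.items()
                     for s, loc_s in binding_sites.items()
                     if overlaps(loc_t, loc_s)]
    print(joined)  # [('txA', 'siteX')]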
Abstract:
Obesity has been recognized as a worldwide public health problem. It significantly increases the chances of developing several diseases, including Type II diabetes. The roles of insulin and leptin in obesity involve reactions that can be better understood when they are presented step by step. The aim of this work was to design software with data from some of the most recent publications on obesity, especially those concerning the roles of insulin and leptin in this metabolic disturbance. The most notable characteristic of this software is the use of animations representing the cellular response together with the presentation of recently discovered mechanisms on the participation of insulin and leptin in processes leading to obesity. The software was field tested in the Biochemistry of Nutrition web-based course. After using the software and discussing its contents in chatrooms, students were asked to answer an evaluation survey about the whole activity and the usefulness of the software within the learning process. The teaching assistants (TA) evaluated the software as a tool to help in the teaching process. The students' and TAs' satisfaction was very evident and encouraged us to move forward with the software development and to improve the use of this kind of educational tool in biochemistry classes.
Abstract:
The rise of component-based software development has created an urgent need for effective application program interface (API) documentation. Experience has shown that it is hard to create precise and readable documentation. Prose documentation can provide a good overview but lacks precision. Formal methods offer precision, but the resulting documentation is expensive to develop; worse, few developers have the skill or inclination to read formal documentation. We present a pragmatic solution to the problem of API documentation. We augment the prose documentation with executable test cases, including expected outputs, and use the prose plus the test cases as the documentation. With appropriate tool support, the test cases are easy to develop and read. Such test cases constitute a completely formal, albeit partial, specification of input/output behavior. Equally important, consistency between code and documentation is demonstrated by running the test cases. This approach provides an attractive bridge between formal and informal documentation. We also present a tool that supports compact and readable test cases as well as the generation of test drivers and documentation, and we illustrate the approach with detailed case studies.
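A minimal sketch of the idea, using Python's doctest module as an analogue rather than the tool described in the paper: the prose explains the behavior, the embedded test cases with expected outputs make it precise, and running them checks that documentation and code still agree. The function and its examples are invented for illustration.

    # Hypothetical sketch of prose-plus-executable-test-cases documentation,
    # using Python's doctest module as a stand-in for the paper's tool support.

    def normalize(path):
        """Collapse repeated slashes in a POSIX-style path.

        The examples below are executable: they document the expected
        input/output behavior and are checked by running this module.

        >>> normalize('a//b///c')
        'a/b/c'
        >>> normalize('/')
        '/'
        """
        out = []
        for ch in path:
            if ch != '/' or not out or out[-1] != '/':
                out.append(ch)
        return ''.join(out)

    if __name__ == '__main__':
        import doctest
        doctest.testmod()  # consistency check: documentation vs. code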
Abstract:
Recent progress in the software development world has accompanied a change in hardware from heavy mainframes and desktop machines to unimaginably small devices, leading to the prophetic "third computing paradigm", Ubiquitous Computing. Still, these novel, unnoticeable devices lack various capabilities, such as computing power, storage capacity, and human interface. The connectivity associated with these devices is also considered a handicap, as it generally comes with expensive and limited protocols such as GSM and UMTS. Against this background, this paper presents a minimal communication protocol that introduces better interfaces for limited devices. Special attention has been paid to the limitations of connectivity, storage capacity, and scalability of the developed software applications. To illustrate this new protocol, a case study is presented addressing car sensors communicating with a central
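As a hypothetical illustration of what a minimal message format for such constrained devices could look like (the abstract does not specify the protocol's actual frame layout), the sketch below packs a sensor reading into a compact binary frame with a one-byte checksum.

    # Hypothetical sketch of a compact frame for constrained devices:
    # 1-byte sensor id, 2-byte reading, 1-byte checksum. This is an
    # illustration only; the paper's protocol layout is not given here.
    import struct

    def encode(sensor_id, value):
        body = struct.pack('>BH', sensor_id, value)  # big-endian: id, 16-bit value
        checksum = sum(body) & 0xFF
        return body + bytes([checksum])

    def decode(frame):
        sensor_id, value = struct.unpack('>BH', frame[:3])
        if sum(frame[:3]) & 0xFF != frame[3]:
            raise ValueError('corrupted frame')
        return sensor_id, value

    frame = encode(7, 1023)            # e.g. car sensor 7 reporting value 1023
    print(len(frame), decode(frame))   # 4 (7, 1023)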