954 results for software engineering: metrics


Relevance: 90.00%

Abstract:

The main purpose of this paper is to propose and test a model for assessing how favorable the conditions are for adopting agile methods in software development environments where traditional methods predominate. To achieve this aim, a survey was administered to software developers at a Brazilian public retail bank. Two statistical techniques were used to assess the quantitative data from the survey's closed questions: exploratory factor analysis, which validated the structure of perspectives related to the agile model underlying the proposed assessment, and frequency distribution analysis, which categorized the answers. Qualitative data from the survey's open question were analyzed using qualitative thematic content analysis. As a result, the paper proposes a model to assess the favorability of conditions for adopting agile practices within the context studied.
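
The two quantitative steps described in the abstract can be sketched in code. The example below is a minimal, hypothetical illustration only: it assumes Likert-scale responses stored in a CSV file with item columns named q1, q2, ..., and uses scikit-learn's FactorAnalysis as a stand-in for the exploratory factor analysis; none of the file names, column names, or the number of factors come from the paper.

```python
# Minimal sketch of the two quantitative steps described above:
# (1) exploratory factor analysis to check how items group into perspectives,
# (2) frequency distribution analysis of the closed-question answers.
# File name, column names, and number of factors are hypothetical.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

responses = pd.read_csv("survey_responses.csv")     # Likert items q1..qN
items = responses.filter(regex=r"^q\d+$")

# (1) Factor analysis: inspect loadings to see whether items cluster into
# the perspectives proposed by the assessment model.
fa = FactorAnalysis(n_components=4, random_state=0)
fa.fit(items)
loadings = pd.DataFrame(fa.components_.T, index=items.columns)
print(loadings.round(2))

# (2) Frequency distribution: categorise the answers item by item.
for col in items.columns:
    print(col, items[col].value_counts(normalize=True).round(2).to_dict())
```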

Relevance: 90.00%

Abstract:

In recent decades we have seen enormous increases in the capabilities of software-intensive systems, resulting in exponential growth in their size and complexity. Software and systems engineers routinely develop systems with advanced functionality that would not even have been conceived of 20 years ago. This observation was highlighted in the Critical Code report commissioned by the US Department of Defense in 2010, which identified a critical software engineering challenge as the ability to deliver “software assurance in the presence of ... architectural innovation and complexity, criticality with respect to safety, (and) overall complexity and scale”.

Relevance: 90.00%

Abstract:

Software quality management (SQM) is the collection of all processes that ensure that software products, services, and life cycle process implementations meet organizational software quality objectives and achieve stakeholder satisfaction. SQM comprises the following subcategories: software quality planning, software quality assurance (SQA), software quality control, and software process improvement. This chapter provides a general overview of the SQA domain and discusses the related concepts. A conceptual model for a software quality framework is presented, together with current approaches to SQA. The chapter concludes with some of the current and future challenges regarding SQA.

Relevance: 90.00%

Abstract:

Managing Trade-Offs in Adaptable Software Architectures explores the latest research on adapting large complex systems to changing requirements. To adapt a system, engineers must evaluate different quality attributes, including the trade-offs needed to balance functional and quality requirements so that the system remains well-functioning throughout its lifetime. This comprehensive book brings together research on how to manage such trade-offs and help engineers architect adaptive systems in different business contexts, covering state-of-the-art techniques, methodologies, tools, best practices, guidelines, and directions for future software engineering research and practice. Each contributed chapter considers the practical application of its topic through case studies, experiments, empirical validation, or systematic comparisons with other approaches already in practice. Topics of interest include, but are not limited to: how to architect a system for adaptability, software architecture for self-adaptive systems, understanding and balancing the trade-offs involved, architectural patterns for self-adaptive systems, how quality attributes are exhibited by the architecture of the system, how to connect the quality of a software architecture to system architecture or other system considerations, and more.

Relevance: 80.00%

Abstract:

Two distinct maintenance-data-models are studied: a government Enterprise Resource Planning (ERP) maintenance-data-model and the Software Engineering Institute (SEI) maintenance-data-model. The objectives are to: (i) determine whether the SEI maintenance-data-model is sufficient in the context of ERP (by comparing it with an ERP case), (ii) identify whether the ERP maintenance-data-model in this study has adequately captured the essential and common maintenance attributes (by comparing it with the SEI model), and (iii) propose a new ERP maintenance-data-model as necessary. Our findings suggest that (i) there are variations from the SEI model in an ERP context, and (ii) there is room for improvement in our ERP case's maintenance-data-model. Thus, a new ERP maintenance-data-model capturing the fundamental ERP maintenance attributes is proposed. This model is important for: (i) enhancing the reporting and visibility of maintenance activities, (ii) monitoring maintenance problems, resolutions, and performance, and (iii) helping maintenance managers better manage maintenance activities and make well-informed maintenance decisions.
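
To make the idea of a maintenance-data-model more concrete, the sketch below shows one possible way a maintenance record could be represented. The attribute names are hypothetical illustrations of the kinds of "fundamental ERP maintenance attributes" an organisation might track (the request, its resolution, and data for performance reporting); they are not the fields defined by the proposed model.

```python
# Hypothetical maintenance record illustrating the kinds of attributes a
# maintenance-data-model tracks. Field names are illustrative only, not the
# attributes of the ERP maintenance-data-model proposed in the paper.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class MaintenanceRequest:
    request_id: str
    module: str                      # affected ERP module
    category: str                    # e.g. corrective, adaptive, enhancement
    description: str
    reported_at: datetime
    priority: str = "medium"
    resolution: Optional[str] = None
    resolved_at: Optional[datetime] = None
    effort_hours: float = 0.0

    def turnaround_days(self) -> Optional[float]:
        """Elapsed days from report to resolution, for performance reporting."""
        if self.resolved_at is None:
            return None
        return (self.resolved_at - self.reported_at).total_seconds() / 86400.0
```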

Relevance: 80.00%

Abstract:

Ordinary desktop computers continue to obtain ever more resources – increased processing power, memory, network speed and bandwidth – yet these resources spend much of their time underutilised. Cycle stealing frameworks harness these resources so they can be used for high-performance computing. Traditionally, cycle stealing systems have used client-server architectures, which place significant limits on their ability to scale and on the range of applications they can support. By applying a fully decentralised network model to cycle stealing, the limits of centralised models can be overcome. Using decentralised networks in this manner presents difficulties that have not been encountered in their previous uses. Generally, decentralised applications do not require any significant fault tolerance guarantees; high-performance computing, on the other hand, requires very stringent guarantees to ensure correct results are obtained. Unfortunately, mechanisms developed for traditional high-performance computing cannot simply be carried over because of their reliance on a reliable storage mechanism. In the highly dynamic world of P2P computing, this reliable storage is not available. As part of this research, a fault tolerance system has been created which provides considerable reliability without the need for persistent storage. As well as increased scalability, fully decentralised networks offer the ability for volunteers to communicate directly. This makes it possible to support applications whose tasks require direct, message-passing style communication. Previous cycle stealing systems have supported only embarrassingly parallel applications and applications with limited forms of communication, so a new programming model has been developed to support this style of communication within a cycle stealing context. In this thesis I present a fully decentralised cycle stealing framework. The framework addresses the problems of providing a reliable fault tolerance system and supporting direct communication between parallel tasks. The thesis includes a programming model for developing cycle stealing applications with direct inter-process communication, and methods for optimising object locality on decentralised networks.
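
The sketch below illustrates the style of programming model the abstract describes: tasks that can pass messages directly to peer tasks rather than being restricted to embarrassingly parallel work. The Task class, its send/receive methods, and the in-process queues standing in for the P2P overlay are all invented for illustration; they are not the thesis framework's actual API.

```python
# Hypothetical sketch of a cycle-stealing task model with direct,
# message-passing style communication between peer tasks.
import queue

class Task:
    """A unit of work that can exchange messages directly with peer tasks."""
    def __init__(self, task_id):
        self.task_id = task_id
        self.inbox = queue.Queue()   # stands in for delivery over the P2P overlay
        self.peers = {}              # task_id -> Task

    def connect(self, other):
        self.peers[other.task_id] = other
        other.peers[self.task_id] = self

    def send(self, peer_id, message):
        self.peers[peer_id].inbox.put((self.task_id, message))

    def receive(self):
        return self.inbox.get()

# Usage: two tasks exchanging boundary data, the kind of direct communication
# embarrassingly parallel cycle-stealing systems cannot express.
a, b = Task("a"), Task("b")
a.connect(b)
a.send("b", {"boundary": [1.0, 2.0, 3.0]})
sender, payload = b.receive()
print(f"b received {payload} from {sender}")
```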

Relevance: 80.00%

Abstract:

Search engines have forever changed the way people access and discover knowledge, allowing information about almost any subject to be quickly and easily retrieved within seconds. As increasingly more material becomes available electronically, the influence of search engines on our lives will continue to grow. This presents the problem of how to find what information is contained in each search engine, what bias a search engine may have, and how to select the best search engine for a particular information need. This research introduces a new method, search engine content analysis, to solve this problem. Search engine content analysis is a new development of collection selection, a traditional information retrieval task dealing with general information repositories. Current research in collection selection relies on full access to the collections or on estimates of their size, and collection descriptions are often represented as term occurrence statistics. An automatic ontology learning method is developed for search engine content analysis, which trains an ontology with world knowledge of hundreds of different subjects in a multilevel taxonomy. This ontology is then mined to find important classification rules, and these rules are used to perform an extensive analysis of the content of the largest general-purpose Internet search engines in use today. Instead of representing collections as a set of terms, as commonly occurs in collection selection, they are represented as a set of subjects, leading to a more robust representation of information and a decrease in synonymy. The ontology-based method was compared with ReDDE (Relevant Document Distribution Estimation method for resource selection), the current state-of-the-art collection selection method, which relies on collection size estimation; the comparison used the standard R-value metric, with encouraging results. The method was also used to analyse the content of the most popular search engines in use today, including Google and Yahoo, as well as several specialist search engines such as PubMed and that of the U.S. Department of Agriculture. In conclusion, this research shows that the ontology-based method mitigates the need for collection size estimation.
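
The core representational idea, describing each engine by a distribution of ontology subjects rather than by term statistics, can be sketched as follows. The classify_subject function is a placeholder for the ontology-derived classification rules, and the engine names and documents are invented for illustration; this is not the ReDDE comparison or the thesis implementation itself.

```python
# Hypothetical sketch: represent each search engine's collection as a
# normalised distribution over subjects, then rank engines by how well
# their profile covers a query's subject.
from collections import Counter

def classify_subject(document_text):
    # Placeholder for the classification rules mined from the trained ontology.
    return "medicine" if "patient" in document_text else "computing"

def subject_profile(sampled_documents):
    """Represent a collection as a normalised distribution over subjects."""
    counts = Counter(classify_subject(doc) for doc in sampled_documents)
    total = sum(counts.values())
    return {subject: n / total for subject, n in counts.items()}

def rank_engines(profiles, query_subject):
    """Rank engines by the weight their profile gives to the query's subject."""
    return sorted(profiles,
                  key=lambda e: profiles[e].get(query_subject, 0.0),
                  reverse=True)

# Illustrative usage with made-up sampled documents.
profiles = {
    "engine_a": subject_profile(["patient cohort study", "patient outcomes"]),
    "engine_b": subject_profile(["distributed systems paper", "patient data mining"]),
}
print(rank_engines(profiles, "medicine"))
```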

Relevance: 80.00%

Abstract:

Computational biology increasingly demands the sharing of sophisticated data and annotations between research groups. Web 2.0 style sharing and publication requires that biological systems be described in well-defined yet flexible and extensible formats which enhance exchange and re-use. In contrast to many of the standards for exchange in the genomic sciences, descriptions of biological sequences show a great diversity in format and function, impeding the definition and exchange of sequence patterns. In this presentation, we introduce BioPatML, an XML-based pattern description language that supports a wide range of patterns and allows the construction of complex, hierarchically structured patterns and pattern libraries. BioPatML unifies the diversity of current pattern description languages and fills a gap in the set of XML-based description languages for biological systems. We discuss the structure and elements of the language and demonstrate its advantages on a series of applications, showing lightweight integration between the BioPatML parser and search engine and the SilverGene genome browser. We conclude by describing our web site, which enables large-scale pattern sharing, and our efforts to seed this repository.
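
As a rough illustration of what a hierarchically structured, XML-based sequence pattern might look like, the sketch below parses and walks a small pattern tree. The element and attribute names are invented for the example and are not the actual BioPatML schema.

```python
# Hypothetical sketch of reading a hierarchically structured XML pattern
# description. Element and attribute names are illustrative only.
import xml.etree.ElementTree as ET

PATTERN_XML = """
<PatternSet name="example">
  <Series name="promoter-like">
    <Motif name="box1" sequence="TATAAT" threshold="0.8"/>
    <Gap minimum="15" maximum="20"/>
    <Motif name="box2" sequence="TTGACA" threshold="0.8"/>
  </Series>
</PatternSet>
"""

def describe(element, depth=0):
    """Walk the pattern tree and print each element with its attributes."""
    attrs = " ".join(f'{k}="{v}"' for k, v in element.attrib.items())
    print("  " * depth + f"{element.tag} {attrs}".rstrip())
    for child in element:
        describe(child, depth + 1)

describe(ET.fromstring(PATTERN_XML))
```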