40 results for software quality metrics


Relevance: 100.00%

Abstract:

Software quality management (SQM) is the collection of all processes that ensure that software products, services, and life cycle process implementations meet organizational software quality objectives and achieve stakeholder satisfaction. SQM comprises the subcategories of software quality planning, software quality assurance (SQA), software quality control, and software process improvement. This chapter provides a general overview of the SQA domain and discusses the related concepts. A conceptual model for a software quality framework is presented, together with current approaches to SQA. The chapter concludes with some of the identified current and future challenges regarding SQA.

Relevance: 100.00%

Abstract:

The risk of failure of the software development process remains high despite many attempts to improve the quality of software engineering. Contemporary approaches to process assurance, such as the Capability Maturity Model, have not prevented systemic failures, nor have project management methodologies provided guarantees of software quality. The paper proposes an approach to software quality assurance based on a knowledge-mediated concurrent audit, which incorporates essential feedback processes. Through a tightly integrated approach to quality audit, programmers would be empowered to use any chosen methodology to advantage, supported by intelligent monitoring of the essential interactions that occur in the development process. An experimental application implementing some aspects of the proposal is described.

Relevance: 100.00%

Abstract:

Image fusion quality metrics have evolved from image processing quality metrics. They measure the quality of fused images by estimating how much localized information has been transferred from the source images into the fused image. However, this technique assumes that it is actually possible to fuse two images into one without any loss. In practice, some features must be sacrificed or relaxed in both source images. Relaxed features might be very important, such as edges, gradients, and texture elements, and the importance of a given feature is application dependent. This paper presents a new method for image fusion quality assessment based on estimating how much valuable information has not been transferred.
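
The abstract does not reproduce the metric itself, so the following is a minimal sketch of the underlying idea, assuming a gradient-magnitude proxy for "valuable information": estimate how much edge content from each source image fails to appear in the fused image and report the weighted complement as a loss score. The function names and the gradient-based proxy are illustrative assumptions, not the authors' method.

```python
import numpy as np

def gradient_magnitude(img):
    """Simple edge-strength proxy: magnitude of the numerical gradient."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def fusion_loss_score(source_a, source_b, fused, eps=1e-8):
    """Estimate how much edge information from the sources is missing in the
    fused image (0 = nothing lost, 1 = everything lost). Illustrative only."""
    ga, gb, gf = map(gradient_magnitude, (source_a, source_b, fused))
    # Information preserved from a source is the fused gradient capped by the
    # source gradient (one cannot "preserve" more than was there).
    preserved_a = np.minimum(ga, gf).sum() / (ga.sum() + eps)
    preserved_b = np.minimum(gb, gf).sum() / (gb.sum() + eps)
    # Loss is the complement, weighted by each source's total edge energy.
    wa, wb = ga.sum(), gb.sum()
    return ((1 - preserved_a) * wa + (1 - preserved_b) * wb) / (wa + wb + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((64, 64)), rng.random((64, 64))
    fused = 0.5 * (a + b)          # naive average fusion
    print(f"loss score: {fusion_loss_score(a, b, fused):.3f}")
```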

Relevance: 100.00%

Abstract:

In recent decades we have seen enormous increases in the capabilities of software-intensive systems, resulting in exponential growth in their size and complexity. Software and systems engineers routinely develop systems with advanced functionalities that would not even have been conceived of 20 years ago. This observation was highlighted in the Critical Code report commissioned by the US Department of Defense in 2010, which identified a critical software engineering challenge as the ability to deliver “software assurance in the presence of ... architectural innovation and complexity, criticality with respect to safety, (and) overall complexity and scale”.

Relevance: 100.00%

Abstract:

How to provide cost-effective strategies for software testing has long been a research focus in software engineering. Many researchers have addressed the effectiveness and quality metrics of software testing, and many interesting results have been obtained. However, one issue of paramount importance in software testing – the intrinsically imprecise and uncertain relationships within testing metrics – has been left unaddressed. To this end, a new quality and effectiveness measurement based on fuzzy logic is proposed. Software quality features and analogy-based reasoning are discussed, which make it possible to assess quality and effectiveness consistently across different test projects. Experimental results are also provided to verify the proposed measurement.
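
A minimal sketch of how a fuzzy-logic quality measurement of this kind could look, assuming simple ramp membership functions over a few common testing metrics (coverage, defect detection rate, defect density) and a weighted-mean aggregation; the metric choices, breakpoints, and weights are illustrative assumptions rather than the paper's design.

```python
def ramp_up(x, low, high):
    """Fuzzy membership rising linearly from 0 (at `low`) to 1 (at `high`)."""
    if high == low:
        return 1.0 if x >= high else 0.0
    return max(0.0, min(1.0, (x - low) / (high - low)))

def fuzzy_test_quality(coverage, detection_rate, defect_density):
    """Weighted aggregation of fuzzy 'good quality' grades for three common
    testing metrics. Breakpoints and weights are illustrative placeholders."""
    grades = {
        "coverage":    ramp_up(coverage, 0.50, 0.90),              # more is better
        "detection":   ramp_up(detection_rate, 0.40, 0.90),        # more is better
        "low_density": 1.0 - ramp_up(defect_density, 0.05, 0.50),  # fewer is better
    }
    weights = {"coverage": 0.4, "detection": 0.4, "low_density": 0.2}
    overall = sum(weights[k] * grades[k] for k in grades)
    return overall, grades

if __name__ == "__main__":
    score, grades = fuzzy_test_quality(coverage=0.85,
                                       detection_rate=0.70,
                                       defect_density=0.10)
    print(grades, f"overall fuzzy quality: {score:.2f}")
```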

Relevance: 100.00%

Abstract:

How to provide cost-effective strategies for software testing has long been a research focus in software engineering. Many researchers have addressed the effectiveness and quality metrics of software testing, and many interesting results have been obtained. However, one issue of paramount importance in software testing – the intrinsically imprecise and uncertain relationships within testing metrics – has been left unaddressed. To this end, a new quality and effectiveness measurement based on fuzzy logic is proposed. Related issues, such as software quality features and fuzzy reasoning for test-project similarity measurement, are discussed; these allow quality and effectiveness to be compared consistently between different test projects. Experiments were conducted to verify the proposed measurement using real data from actual software testing projects. The results show that the proposed fuzzy-logic-based metric is effective and efficient for measuring and evaluating the quality and effectiveness of test projects.
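
This second abstract adds fuzzy reasoning for test-project similarity. A minimal sketch, assuming projects are described by fuzzy grade vectors on shared criteria (for instance, grades like those in the previous sketch) and compared with a min/max (Jaccard-style) similarity; the particular similarity measure is an assumption, not the paper's.

```python
def fuzzy_similarity(grades_a, grades_b):
    """Similarity of two test projects described by fuzzy grades in [0, 1]
    on the same criteria, using the classic min/max measure. The choice of
    measure is illustrative, not taken from the paper."""
    keys = grades_a.keys() & grades_b.keys()
    num = sum(min(grades_a[k], grades_b[k]) for k in keys)
    den = sum(max(grades_a[k], grades_b[k]) for k in keys)
    return num / den if den else 1.0

# Example: compare a new test project against a previously assessed one.
past    = {"coverage": 0.90, "detection": 0.80, "low_density": 0.70}
current = {"coverage": 0.70, "detection": 0.85, "low_density": 0.60}
print(f"project similarity: {fuzzy_similarity(past, current):.2f}")
```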

Relevance: 90.00%

Abstract:

The pressures of modern manufacturing require that quality-cost benefits be determined when evaluating new procedures or alternative operating policies. Traditionally, cost reports and other quality metrics have been used for this purpose. However, the interactions between the main quality cost drivers cannot be understood at a superficial level, and the effect that a new process or an alternative operating policy may have on quality costs is difficult to determine. An alternative to traditional costing methods is simulation. The current work uses simulation to evaluate quality costs in an automotive stamping plant where quality control is performed by operators inspecting their own work. Self-inspection quality control provides instantaneous feedback on quality problems, allowing quick rectification. However, the difficult nature of surface-finish inspection of automotive panels can create inspection and control errors. A simulation model was developed to investigate the cost effects of inspection and control errors. It was found that inspection error significantly increased total quality cost, with the magnitude of the increase dependent on the level of control. Further, the simulation found that the lowest-cost quality control policy was one that allowed a number of defective panels to accumulate before resetting the press line.
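
The stamping-plant model itself is not given in the abstract, so the following is a minimal Monte Carlo sketch of the core mechanism: panels are defective with some probability, self-inspection can miss defects or falsely reject good panels, and the line is reset only after a number of detected defects have accumulated. All probabilities, costs, and the reset-policy parameterisation are placeholder assumptions, not values from the study.

```python
import random

def simulate_quality_cost(n_panels=100_000, p_defect=0.03,
                          p_miss=0.10, p_false_alarm=0.02,
                          reset_threshold=5,
                          cost_escape=50.0, cost_scrap=5.0, cost_reset=200.0,
                          seed=1):
    """Monte Carlo sketch: total quality cost under self-inspection with
    inspection errors and a 'reset after N detected defects' policy."""
    rng = random.Random(seed)
    total_cost, detected_since_reset = 0.0, 0
    for _ in range(n_panels):
        defective = rng.random() < p_defect
        if defective:
            if rng.random() < p_miss:          # inspection error: defect escapes
                total_cost += cost_escape      # found downstream, expensive
            else:
                total_cost += cost_scrap       # caught and scrapped in-plant
                detected_since_reset += 1
        elif rng.random() < p_false_alarm:     # good panel wrongly rejected
            total_cost += cost_scrap
        if detected_since_reset >= reset_threshold:
            total_cost += cost_reset           # stop and reset the press line
            detected_since_reset = 0
    return total_cost

if __name__ == "__main__":
    for threshold in (1, 5, 10):
        cost = simulate_quality_cost(reset_threshold=threshold)
        print(f"reset after {threshold:>2} detected defects -> total cost {cost:,.0f}")
```

Sweeping the reset threshold in the `__main__` block mirrors the comparison of control policies only in spirit; the actual study uses a detailed plant simulation.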

Relevance: 90.00%

Abstract:

A theoretical framework is built for capturing properties of competition in mature monopolistic digital product markets. Based on an empirical study of the market for accounting software for small and medium enterprises, a consumer choice model is suggested in which a rational consumer, already using a particular version of a software package, considers three options: continue using it, upgrade to a newer version of the product, or switch to a competing product. The consumer's decision is driven by software quality and network effects, under price and switching-cost constraints. A modified consumer demand function is used for the model, and theoretical conditions for choosing each of the three options are analysed. The results are applicable to a wide range of digital products.
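
A minimal sketch of the decision structure described above, assuming a simple linear utility over quality, network effects, price, and switching cost; the functional form, weights, and numbers are illustrative and not the paper's demand function.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One of the consumer's three options, described by the factors named in
    the abstract; the linear utility below is an assumed form."""
    name: str
    quality: float         # perceived software quality
    network: float         # value of the product's user network
    price: float           # upgrade or purchase price (0 for staying put)
    switching_cost: float  # learning/migration cost (0 unless switching)

def utility(o, w_quality=1.0, w_network=0.5):
    return w_quality * o.quality + w_network * o.network - o.price - o.switching_cost

options = [
    Option("keep current version",   quality=0.60, network=0.8, price=0.0, switching_cost=0.0),
    Option("upgrade to new version", quality=0.90, network=0.8, price=0.3, switching_cost=0.0),
    Option("switch to competitor",   quality=0.95, network=0.5, price=0.4, switching_cost=0.5),
]
best = max(options, key=utility)
print(f"chosen option: {best.name} (utility {utility(best):.2f})")
```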

Relevance: 90.00%

Abstract:

Evaluating the validity of formative variables has presented ongoing challenges for researchers. In this paper we use global criterion measures to compare and critically evaluate two alternative formative measures of System Quality. One model is based on the ISO-9126 software quality standard, and the other is based on a leading information systems research model. We find that despite both models having a strong provenance, many of the items appear to be non-significant in our study. We examine the implications of this by evaluating the quality of the criterion variables we used, and the performance of PLS when evaluating formative models with a large number of items. We find that our respondents had difficulty distinguishing between global criterion variables measuring different aspects of overall System Quality. Also, because formative indicators “compete with one another” in PLS, it may be difficult to develop a set of measures which are all significant for a complex formative construct with a broad scope and a large number of items. Overall, we suggest that there is cautious evidence that both sets of measures are valid and largely equivalent, although questions still remain about the measures, the use of criterion variables, and the use of PLS for this type of model evaluation.
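
As a rough illustration of criterion-based checking of formative indicators (not the paper's PLS analysis), the sketch below regresses a single global criterion score on a set of formative items and flags which items carry significant weight; with correlated items, the weights "compete" and several can come out non-significant. The synthetic data and variable names are assumptions for illustration only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n, k = 200, 6                                    # respondents, formative items
items = rng.normal(size=(n, k))                  # e.g. standardized item scores
true_weights = np.array([0.6, 0.4, 0.0, 0.3, 0.0, 0.1])
criterion = items @ true_weights + rng.normal(scale=0.5, size=n)  # global System Quality rating

# OLS regression of the global criterion on the formative items.
model = sm.OLS(criterion, sm.add_constant(items)).fit()
for i, (coef, p) in enumerate(zip(model.params[1:], model.pvalues[1:]), start=1):
    flag = "significant" if p < 0.05 else "not significant"
    print(f"item {i}: weight {coef:+.2f}  (p = {p:.3f}, {flag})")
```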

Relevance: 90.00%

Abstract:

Reporting usability defects can be a challenging task, especially when it comes to convincing software developers that the reported defect actually requires attention. Stronger evidence in the form of specific details is often needed. However, research to date in software defect reporting has not investigated the value of capturing different information based on defect type. We surveyed practitioners in both open source communities and industrial software organizations about their usability defect reporting practices to better understand the information needed to address usability defect reporting issues. Our analysis of 147 responses shows that reporters often provide the observed result, expected result, and steps to reproduce when describing usability defects, similar to the way other types of defects are reported. However, reporters rarely provide usability-related information. In fact, reporters ranked cause of the problem as the most difficult information to provide, followed by usability principle, video recording, UI event trace, and title. Conversely, software developers consider cause of the problem to be the most helpful information for fixing usability defects. Our statistical analysis reveals a substantial gap between what reporters provide and what software developers need when fixing usability defects. We propose some remedies to resolve this gap.

Relevance: 80.00%

Abstract:

Background: The journal impact factor (IF) has become widely used as an absolute measure of the quality of professional journals. It is also increasingly used as a tool for measuring the academic performance of researchers and to inform decisions concerning the appointment and tenure of academic staff as well as the viability of their departments/schools. In keeping with these IF-related trends, nurse researchers and faculty the world over are being increasingly expected to publish only in journals that have a high IF and to abandon all other forms of publishing (including books and book chapters) that do not attract IF rankings.

Issues: The IF obsession is placing in jeopardy the sustainability and hence viability of nursing journals and academic nursing publication lists (academic texts). If nurse authors abandon their publishing agenda and publish only in 'elite' journals (many of which may be outside nursing), the capacity of the nursing profession to develop and control the cutting edge of its disciplinary knowledge could be placed at risk.

Actions: Other means for assessing the quality and impact of nursing journals need to be devised. In addition, other works (such as books and book chapters) need also to be included in quality metrics. Nurse authors and journal editors must work together and devise ways to ensure the sustainability and viability of nursing publications.

Relevance: 80.00%

Abstract:

Multi-frame super-resolution algorithms aim to increase spatial resolution by fusing information from several low-resolution perspectives of a scene. While a wide array of super-resolution algorithms now exist, the comparative capability of these techniques in practical scenarios has not been adequately explored. In addition, a standard quantitative method for assessing the relative merit of super-resolution algorithms is required. This paper presents a comprehensive practical comparison of existing super-resolution techniques using a shared platform and 4 common greyscale reference images. In total, 13 different super-resolution algorithms are evaluated, and as accurate alignment is critical to the super-resolution process, 6 registration algorithms are also included in the analysis. Pixel-based visual information fidelity (VIFP) is selected from the 12 image quality metrics reviewed as the measure most suited to the appraisal of super-resolved images. Experimental results show that Bayesian super-resolution methods utilizing the simultaneous autoregressive (SAR) prior produce the highest quality images when combined with generalized stochastic Lucas-Kanade optical flow registration.
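
The VIFP metric selected in the paper is not implemented here; as a stand-in, the sketch below ranks candidate super-resolved images against a ground-truth reference using SSIM and PSNR from scikit-image, which illustrates the same full-reference evaluation workflow under that substitution.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def rank_super_resolved(reference, candidates, names):
    """Rank candidate super-resolved images against a ground-truth reference.
    SSIM and PSNR are stand-in full-reference metrics; the paper itself
    selects pixel-based VIF (VIFP), which is not implemented here."""
    scores = []
    for name, img in zip(names, candidates):
        ssim = structural_similarity(reference, img, data_range=1.0)
        psnr = peak_signal_noise_ratio(reference, img, data_range=1.0)
        scores.append((name, ssim, psnr))
    return sorted(scores, key=lambda s: s[1], reverse=True)   # rank by SSIM

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.random((128, 128))
    noisy = np.clip(ref + rng.normal(scale=0.05, size=ref.shape), 0, 1)
    blurry = np.clip(0.5 * ref + 0.25, 0, 1)
    for name, ssim, psnr in rank_super_resolved(ref, [noisy, blurry], ["A", "B"]):
        print(f"{name}: SSIM={ssim:.3f}  PSNR={psnr:.1f} dB")
```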

Relevance: 80.00%

Abstract:

Background: Continuous content management of health information portals is a feature vital for their sustainability and widespread acceptance. The knowledge and experience of a domain expert is essential for content management in the health domain. The rate of generation of online health resources is exponential, and manual examination for relevance to a specific topic and audience is therefore a formidable challenge for domain experts. Intelligent content discovery for effective content management is a less researched topic. An existing expert-endorsed content repository can provide the necessary leverage to automatically identify relevant resources and evaluate qualitative metrics.

Objective: This paper reports on design research towards an intelligent technique for automated content discovery and ranking for health information portals. The proposed technique aims to improve the efficiency of the current, mostly manual process of portal content management by utilising an existing expert-endorsed content repository as a supporting base and a benchmark to evaluate the suitability of new content.

Methods: A model for content management was established based on a field study of potential users. The proposed technique is integral to this content management model and executes in several phases (i.e., query construction, content search, text analytics, and fuzzy multi-criteria ranking). The construction of multi-dimensional search queries with input from WordNet, the use of multi-word and single-word terms as representative semantics for text analytics, and the use of fuzzy multi-criteria ranking for subjective evaluation of quality metrics are original contributions reported in this paper.

Results: The feasibility of the proposed technique was examined with experiments conducted on an actual health information portal, the BCKOnline portal. Both intermediary and final results generated by the technique are presented in the paper; these help to establish the benefits of the technique and its contribution towards effective content management.

Conclusions: The prevalence of large numbers of online health resources is a key obstacle for domain experts involved in content management of health information portals and websites. The proposed technique has proven successful at searching for and identifying resources and measuring their relevance. It can be used to support the domain expert in content management and thereby ensure the health portal remains current.
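
A minimal sketch of the fuzzy multi-criteria ranking phase, assuming each candidate resource has already been graded in [0, 1] on a few quality criteria and is ranked by a weighted aggregation; the criteria names, weights, and aggregation rule are illustrative assumptions, not the BCKOnline pipeline.

```python
def fuzzy_rank(resources, weights):
    """Rank candidate resources by a weighted aggregation of fuzzy grades
    (each grade in [0, 1]). Criteria, weights, and the weighted-mean
    aggregation are illustrative, not the portal's actual rules."""
    def score(grades):
        return sum(weights[c] * grades.get(c, 0.0) for c in weights)
    return sorted(resources.items(), key=lambda kv: score(kv[1]), reverse=True)

weights = {"relevance": 0.4, "credibility": 0.3, "readability": 0.2, "currency": 0.1}
candidates = {
    "resource-A": {"relevance": 0.9, "credibility": 0.7, "readability": 0.6, "currency": 0.8},
    "resource-B": {"relevance": 0.6, "credibility": 0.9, "readability": 0.8, "currency": 0.5},
    "resource-C": {"relevance": 0.8, "credibility": 0.5, "readability": 0.9, "currency": 0.9},
}
for name, grades in fuzzy_rank(candidates, weights):
    total = sum(weights[c] * grades[c] for c in weights)
    print(f"{name}: {total:.2f}")
```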