896 results for software quality metrics


Relevance:

80.00%

Publisher:

Abstract:

If the land sector is to make significant contributions to mitigating anthropogenic greenhouse gas (GHG) emissions in coming decades, it must do so while concurrently expanding production of food and fiber. In our view, mathematical modeling will be required to provide scientific guidance to meet this challenge. In order to be useful in GHG mitigation policy measures, models must simultaneously meet scientific, software engineering, and human capacity requirements. They can be used to understand GHG fluxes, to evaluate proposed GHG mitigation actions, and to predict and monitor the effects of specific actions; the latter applications require a change in mindset that has parallels with the shift from research modeling to decision support. We compare and contrast six agro-ecosystem models (FullCAM, DayCent, DNDC, APSIM, WNMM, and AgMod), chosen because they are used in Australian agriculture and forestry. Underlying structural similarities in the representations of carbon flows through plants and soils in these models are complemented by a diverse range of emphases and approaches to the subprocesses within the agro-ecosystem. None of these agro-ecosystem models handles all land sector GHG fluxes, and considerable model-based uncertainty exists for soil C fluxes and enteric methane emissions. The models also show diverse approaches to the initialisation of model simulations, software implementation, distribution, licensing, and software quality assurance; each of these will differentially affect their usefulness for policy-driven GHG mitigation prediction and monitoring. Specific requirements imposed on the use of models by Australian mitigation policy settings are discussed, and areas for further scientific development of agro-ecosystem models for use in GHG mitigation policy are proposed.
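A minimal sketch of the shared structural idea the abstract mentions, first-order carbon pools linked by transfer fractions, may help fix intuition. The pool names, rate constants, and inputs below are hypothetical and are not taken from FullCAM, DayCent, or any of the other models compared:

```python
# Illustrative two-pool carbon model with first-order decay. All numbers
# and parameter names are invented for this sketch.

def step_pools(plant_c, soil_c, npp=5.0, k_plant=0.1, k_soil=0.02, transfer=0.5):
    """Advance the two carbon pools by one annual time step (t C/ha).

    npp      : carbon input to the plant pool from net primary production
    k_plant  : fraction of plant carbon turned over per year
    k_soil   : fraction of soil carbon respired per year
    transfer : fraction of plant turnover routed to the soil pool
               (the remainder is emitted, e.g. harvest or fire)
    """
    plant_loss = k_plant * plant_c
    soil_loss = k_soil * soil_c
    new_plant = plant_c + npp - plant_loss
    new_soil = soil_c + transfer * plant_loss - soil_loss
    emitted = (1.0 - transfer) * plant_loss + soil_loss
    return new_plant, new_soil, emitted

plant, soil = 20.0, 100.0
for _ in range(50):
    plant, soil, flux = step_pools(plant, soil)
```

With these rates, the plant pool approaches its steady state (npp / k_plant = 50 t C/ha) within a few decades, while the slower soil pool is still converging, which is one reason initialisation strategy matters so much in such models.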

Relevance:

80.00%

Publisher:

Abstract:

This thesis has investigated how to cluster a large number of faces within a multimedia corpus in the presence of large session variation. Quality metrics are used to select the best faces to represent a sequence of faces, and session variation modelling improves clustering performance in the presence of wide variation across videos. Findings from this thesis contribute to improving the performance of both face verification systems and the fully automated clustering of faces from a large video corpus.
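One plausible reading of quality-based face selection can be sketched as follows; the sharpness score used here (variance of a Laplacian response) is an assumed stand-in, since the abstract does not specify which quality metrics the thesis uses:

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness score: variance of a 4-neighbour Laplacian response."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def select_best_faces(faces, k=2):
    """Return indices of the k highest-scoring face images in a sequence."""
    scores = [laplacian_variance(f) for f in faces]
    return sorted(range(len(faces)), key=lambda i: scores[i], reverse=True)[:k]

rng = np.random.default_rng(0)
sharp = rng.random((32, 32))        # high-frequency content scores high
blurred = np.full((32, 32), 0.5)    # flat image: no detail, score 0
best = select_best_faces([blurred, sharp, blurred], k=1)
```

A real system would combine several such scores (pose, illumination, resolution) before picking representatives for clustering.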

Relevance:

80.00%

Publisher:

Abstract:

This thesis studies the human gene expression space using high-throughput gene expression data from DNA microarrays. In molecular biology, high-throughput techniques allow numerical measurement of the expression of tens of thousands of genes simultaneously. In a single study, this data is traditionally obtained from a limited number of sample types with a small number of replicates. For organism-wide analysis, such data has been largely unavailable, and the global structure of the human transcriptome has remained unknown. This thesis introduces a human transcriptome map of different biological entities and an analysis of its general structure. The map is constructed from gene expression data from the two largest public microarray data repositories, GEO and ArrayExpress. The creation of this map contributed to the development of ArrayExpress by identifying and retrofitting previously unusable and missing data and by improving access to its data. It also contributed to the creation of several new tools for microarray data manipulation and to the establishment of data exchange between GEO and ArrayExpress. The data integration for the global map required the creation of a new large ontology of human cell types, disease states, organism parts, and cell lines. The ontology was used in a new text-mining and decision-tree-based method for automatic conversion of human-readable free-text microarray data annotations into a categorised format. Comparability of the data, and minimisation of the systematic measurement errors characteristic of each laboratory in this large cross-laboratory integrated dataset, were ensured by computing a range of microarray data quality metrics and excluding incomparable data. The structure of the global map of human gene expression was then explored by principal component analysis and hierarchical clustering, using heuristics and help from another purpose-built sample ontology.
A preface and motivation for the construction and analysis of a global map of human gene expression is given by the analysis of two microarray datasets of human malignant melanoma. The analysis of these sets incorporates an indirect comparison of statistical methods for finding differentially expressed genes and points to the need to study gene expression on a global level.
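The two exploration steps named above, principal component analysis followed by hierarchical clustering, can be sketched on toy data; the synthetic expression matrix and its two-group structure are invented for illustration, not drawn from GEO or ArrayExpress:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Two synthetic sample groups with different mean expression profiles:
# 10 samples (rows) by 50 genes (columns).
group_a = rng.normal(0.0, 0.1, size=(5, 50))
group_b = rng.normal(2.0, 0.1, size=(5, 50))
expr = np.vstack([group_a, group_b])

# PCA via SVD on the centred matrix; rows of `projections` are the
# sample coordinates on the first two principal components.
centred = expr - expr.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
projections = centred @ vt[:2].T

# Average-linkage hierarchical clustering of samples in PCA space,
# cut into two clusters.
labels = fcluster(linkage(projections, method="average"),
                  t=2, criterion="maxclust")
```

On well-separated data like this the cut recovers the two sample groups exactly; on real cross-laboratory data, quality filtering of the kind the thesis describes is what makes such structure visible at all.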

Relevance:

80.00%

Publisher:

Abstract:

Rate control regulates the instantaneous video bit-rate to maximize a picture quality metric while satisfying channel constraints. Typically, a quality metric such as peak signal-to-noise ratio (PSNR) or weighted signal-to-noise ratio (WSNR) is chosen out of convenience. However, this metric is not always truly representative of perceptual video quality. Attempts to use perceptual metrics in rate control have been limited by the accuracy of the video quality metrics chosen. Recently, new and improved metrics of subjective quality, such as the Video Quality Experts Group's (VQEG) NTIA General Video Quality Model (VQM), have been shown to correlate strongly with subjective quality. Here, we apply the key principles of the NTIA-VQM model to rate control in order to maximize perceptual video quality. Our experiments demonstrate that applying NTIA-VQM-motivated metrics to standard TMN8 rate control in an H.263 encoder results in perceivable quality improvements over a baseline TMN8/MSE-based implementation.
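The core rate-control decision described above can be sketched as a constrained search over quantiser steps: maximise predicted quality subject to a frame bit budget. The rate and quality models below are toy stand-ins, not the actual TMN8 or NTIA-VQM formulations:

```python
# Toy models: frame size shrinks and quality drops as the quantiser
# step (QP) coarsens. Real encoders fit such models per-frame.

def predict_bits(qp, complexity=20000.0):
    """Toy rate model: predicted frame size in bits for a given QP."""
    return complexity / qp

def predict_quality(qp):
    """Toy quality model: finer quantisation (smaller QP) scores higher."""
    return 100.0 - 2.0 * qp

def choose_qp(bit_budget, qp_range=range(1, 32)):
    """Best-quality QP whose predicted frame size fits the bit budget."""
    feasible = [qp for qp in qp_range if predict_bits(qp) <= bit_budget]
    if not feasible:
        return max(qp_range)   # budget too tight: use the coarsest quantiser
    return max(feasible, key=predict_quality)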

Relevance:

80.00%

Publisher:

Abstract:

Quality is formed in the process. The prevention-oriented philosophy of process management has introduced three basic categories of process into the software industry: engineering processes, management processes, and support processes. Process-centred software development, production, and quality management are the defining characteristics of the modern software industry. This paper describes the basic principles of software quality management and proposes a software quality management model and platform based on CMM process management and control, to help software organisations reach higher maturity levels.

Relevance:

80.00%

Publisher:

Abstract:

Statistical Process Control (SPC) is a process control method based on mathematical statistics. It uses statistical tools and techniques (such as control charts) to analyse a process or its outputs, identify and promptly eliminate sources of uncontrolled variation in the process, and control, manage, and improve product quality or process capability, thereby assuring product quality. It helps users take appropriate measures to keep the process in a state of statistical control and to improve production capability so as to meet or exceed customer expectations. SPC originated in manufacturing, and its success there has led to its adoption in many other business domains. This paper discusses the implementation of statistical process control in software quality management.
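The control-chart mechanism described above can be sketched for software quality data: fit 3-sigma limits on an in-control baseline, then flag later observations that fall outside them. The defect-density figures are invented for illustration:

```python
def control_limits(baseline):
    """Return (lower, centre, upper) 3-sigma control limits
    fitted to an in-control baseline sample."""
    n = len(baseline)
    mean = sum(baseline) / n
    sd = (sum((x - mean) ** 2 for x in baseline) / n) ** 0.5
    return mean - 3 * sd, mean, mean + 3 * sd

def flag(samples, limits):
    """Indices of samples outside the control limits."""
    low, _, high = limits
    return [i for i, x in enumerate(samples) if x < low or x > high]

# Defect densities (defects/KLOC) for eight in-control baseline builds,
# then four recent builds to monitor; the third recent build is anomalous.
baseline = [2.1, 1.9, 2.0, 2.2, 1.8, 2.1, 2.0, 1.9]
limits = control_limits(baseline)
recent = [2.0, 2.1, 3.4, 2.0]
flagged = flag(recent, limits)
```

Fitting the limits on a separate baseline matters: including the anomaly itself would inflate the estimated sigma and could mask the very signal the chart exists to catch.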

Relevance:

80.00%

Publisher:

Abstract:

China Computer Federation

Relevance:

80.00%

Publisher:

Abstract:

China Computer Federation

Relevance:

80.00%

Publisher:

Abstract:

To address the measurement problem in software quality evaluation research, a software quality indicator system model based on the ISO/IEC 9126 standard is established. Addressing common problems in quality evaluation methods, the indicator system model and fuzzy mathematics are used to fuzzify the software quality evaluation criteria. Based on measurement data and the relationships between software quality sub-characteristics and indicators, a fuzzy comprehensive evaluation method is used to evaluate the quality of sub-characteristics and characteristics. The resulting software quality evaluation model effectively solves the multi-indicator evaluation problem for software quality and can be used to guide software process improvement.
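The fuzzy comprehensive evaluation step can be sketched as a weight vector combined with a membership matrix; the grades, weights, and membership values below are invented for illustration and are not the paper's data:

```python
import numpy as np

grades = ["excellent", "good", "fair", "poor"]

# Membership matrix R: each row is one quality sub-characteristic
# (e.g. maturity, fault tolerance, recoverability for Reliability);
# each column is its degree of membership in a rating grade, rows sum to 1.
R = np.array([
    [0.5, 0.3, 0.2, 0.0],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.4, 0.4, 0.1],
])

# Weight vector W: relative importance of each sub-characteristic.
W = np.array([0.5, 0.3, 0.2])

# Fuzzy composite using the weighted-average operator: B gives the
# overall membership of the characteristic in each grade.
B = W @ R
verdict = grades[int(np.argmax(B))]
```

Repeating this combination first over indicators, then over sub-characteristics and characteristics, gives the hierarchical multi-indicator evaluation the abstract describes.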

Relevance:

80.00%

Publisher:

Abstract:

To overcome the limitations of existing software review and software testing support tools in collecting defect statistics, a software quality control activity model integrating software testing and software review is proposed. It combines these two quality control techniques from the perspective of defect data management, increasing the value of defect data for software project analysis and improving the efficiency of the quality control activities themselves. The software quality control support tool SQC is introduced, and its design and implementation are described in detail.

Relevance:

80.00%

Publisher:

Abstract:

BACKGROUND: Parrots belong to a group of behaviorally advanced vertebrates and have an advanced ability of vocal learning relative to other vocal-learning birds. They can imitate human speech, synchronize their body movements to a rhythmic beat, and understand complex concepts of referential meaning to sounds. However, little is known about the genetics of these traits. Elucidating the genetic bases would require whole genome sequencing and a robust assembly of a parrot genome. FINDINGS: We present a genomic resource for the budgerigar, an Australian Parakeet (Melopsittacus undulatus) -- the most widely studied parrot species in neuroscience and behavior. We present genomic sequence data that includes over 300× raw read coverage from multiple sequencing technologies and chromosome optical maps from a single male animal. The reads and optical maps were used to create three hybrid assemblies representing some of the largest genomic scaffolds to date for a bird; two of which were annotated based on similarities to reference sets of non-redundant human, zebra finch and chicken proteins, and budgerigar transcriptome sequence assemblies. The sequence reads for this project were in part generated and used for both the Assemblathon 2 competition and the first de novo assembly of a giga-scale vertebrate genome utilizing PacBio single-molecule sequencing. CONCLUSIONS: Across several quality metrics, these budgerigar assemblies are comparable to or better than the chicken and zebra finch genome assemblies built from traditional Sanger sequencing reads, and are sufficient to analyze regions that are difficult to sequence and assemble, including those not yet assembled in prior bird genomes, and promoter regions of genes differentially regulated in vocal learning brain regions. This work provides valuable data and material for genome technology development and for investigating the genomics of complex behavioral traits.
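One standard assembly quality metric used in comparisons like the one above is N50: the scaffold length such that half of the total assembly resides in scaffolds at least that long. A minimal sketch, with toy lengths rather than the budgerigar assembly's actual scaffolds:

```python
def n50(lengths):
    """Return the N50 of a list of scaffold/contig lengths: the length
    at which the running total of the sorted (descending) lengths
    first reaches half of the assembly size."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running * 2 >= total:
            return length
    return 0

scaffolds = [100, 200, 300, 400, 500]   # toy assembly, total 1500
```

N50 rewards contiguity but says nothing about correctness, which is why assembly comparisons (as in the abstract) report it alongside other metrics such as gene completeness and error rates.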

Relevance:

80.00%

Publisher:

Abstract:

Use of structuring mechanisms (such as modularisation) is widely believed to be one of the key ways to improve software quality. Structuring is considered to be at least as important for specification documents as for source code, since it is assumed to improve comprehensibility. Yet, as with most widely held assumptions in software engineering, there is little empirical evidence to support this hypothesis. Also, even if structuring can be shown to be a good thing, we do not know how much structuring is optimal. One of the more popular formal specification languages, Z, encourages structuring through its schema calculus. A controlled experiment is described in which two hypotheses about the effects of structure on the comprehensibility of Z specifications are tested. Evidence was found that structuring a specification into schemas about 20 lines long significantly improved comprehensibility over a monolithic specification. However, there seems to be no perceived advantage in breaking down the schemas into much smaller components. The experiment can be fully replicated.

Relevance:

80.00%

Publisher:

Abstract:

The renewed concern in assessing risks and consequences from technological hazards in industrial and urban areas continues to emphasise the development of local-scale consequence analysis (CA) modelling tools able to predict short-term pollution episodes and exposure effects on humans and the environment in the case of accidents involving hazardous gases (hazmat). In this context, the main objective of this thesis is the development and validation of the EFfects of Released Hazardous gAses (EFRHA) model. This modelling tool is designed to simulate the outflow and atmospheric dispersion of heavy and passive hazmat gases in complex and built-up areas, and to estimate the exposure consequences of short-term pollution episodes in accordance with regulatory/safety threshold limits. Five main modules comprising up-to-date methods constitute the model: meteorological, terrain, source term, dispersion, and effects modules. Accident scenarios with different initial physical states can be examined. Considered the core of the developed tool, the dispersion module comprises a shallow-layer modelling approach capable of accounting for the main influence of obstacles on the hazmat gas dispersion phenomena. Model validation includes qualitative and quantitative analyses of the main outputs by comparison of modelled results against measurements and/or modelled databases. The preliminary analysis of the meteorological and source term modules against outputs from extensively validated models shows a consistent description of ambient conditions and of the variation of the hazmat gas release. Dispersion results are compared against observations in obstructed and unobstructed areas for different release and dispersion scenarios. From the performance validation exercise, acceptable agreement was obtained, showing a reasonable numerical representation of the measured features.
In general, the quality metrics are within or close to the acceptance limits recommended for ‘non-CFD models’, demonstrating the model's capability to reasonably predict the accidental release and atmospheric dispersion of hazmat gases in industrial and urban areas. The EFRHA model was also applied to a particular case study, the Estarreja Chemical Complex (ECC), for a set of accidental release scenarios within a CA scope. The results show the magnitude of potential effects on the surrounding populated area and the influence of the type of accident and of the environment on the main outputs. Overall, the present thesis shows that the EFRHA model can be used as a straightforward tool to support CA studies in the scope of training and planning, but also to support decision-making and emergency response in the case of accidental release of hazmat gases in industrial and built-up areas.
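Two acceptance metrics commonly applied when validating dispersion models against observations are fractional bias (FB) and FAC2, the fraction of predictions within a factor of two of the observations. A minimal sketch with illustrative paired values (typical published acceptance criteria for such models are roughly |FB| ≤ 0.3 and FAC2 ≥ 0.5, stated here as general guidance rather than this thesis's specific limits):

```python
def fractional_bias(observed, predicted):
    """FB = 2 * (mean_obs - mean_pred) / (mean_obs + mean_pred).
    Negative values indicate over-prediction on average."""
    mo = sum(observed) / len(observed)
    mp = sum(predicted) / len(predicted)
    return 2.0 * (mo - mp) / (mo + mp)

def fac2(observed, predicted):
    """Fraction of pairs with 0.5 <= predicted/observed <= 2."""
    hits = sum(1 for o, p in zip(observed, predicted) if 0.5 <= p / o <= 2.0)
    return hits / len(observed)

# Illustrative paired concentrations (observed, predicted); the last
# pair over-predicts badly and falls outside the factor-of-two band.
obs = [1.0, 2.0, 4.0, 8.0]
pred = [1.2, 1.5, 5.0, 30.0]
fb = fractional_bias(obs, pred)
score = fac2(obs, pred)
```

FAC2 is robust to the occasional large miss, while FB is dominated by it, which is why validation exercises report both.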

Relevance:

80.00%

Publisher:

Abstract:

Dissertation of a scientific nature submitted for the degree of Master in Informatics and Computer Engineering.