54 results for Verification and validation technology
in University of Queensland eSpace - Australia
Abstract:
Research in verification and validation (V&V) for concurrent programs can be guided by practitioner information. The survey presented in this paper therefore collected state-of-practice information on V&V technology for concurrency from 35 respondents. The results can help refine existing V&V technology by providing a better understanding of the context in which that technology is used, and responses to questions regarding the motivation for selecting V&V technologies can help refine a systematic approach to V&V technology selection.
Abstract:
It is not surprising that students are unconvinced about the benefits of formal methods if we do not show them how these methods can be integrated with other activities in the software lifecycle. In this paper, we describe an approach to integrating formal specification with more traditional verification and validation techniques in a course that teaches formal specification and specification-based testing. This is accomplished through a series of assignments on a single software component that involves specifying the component in Object-Z, validating that specification using inspection and a specification animation tool, and then testing an implementation of the specification using test cases derived from the formal specification.
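As a loose illustration of the specification-based testing step (the course itself uses Object-Z, which is not shown here), the sketch below stands in a hypothetical bounded-stack component, expresses its precondition and postcondition as Python predicates, and derives one test case satisfying the precondition and one boundary case violating it. All names and the component itself are invented for illustration, not taken from the course material.

```python
# Illustration only: the course specifies its component in Object-Z; here a
# hypothetical bounded stack stands in for it, with the specification's
# precondition/postcondition expressed as Python predicates so that
# specification-derived test cases can be shown. All names are invented.

class BoundedStack:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def push(self, x):
        # Precondition from the (hypothetical) specification: stack not full.
        assert len(self.items) < self.capacity, "precondition violated"
        self.items.append(x)

# Test cases derived by partitioning on the specification's precondition:
def test_push_when_not_full():
    s = BoundedStack(capacity=2)
    s.push(1)
    assert s.items == [1]              # postcondition: item appended

def test_push_when_full_is_rejected():
    s = BoundedStack(capacity=1)
    s.push(1)
    rejected = False
    try:
        s.push(2)                      # boundary case: stack already full
    except AssertionError:
        rejected = True
    assert rejected, "push on a full stack should violate the precondition"

test_push_when_not_full()
test_push_when_full_is_rejected()
```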
Abstract:
NASA is working on complex future missions that require cooperation between multiple satellites or rovers. To implement these systems, developers are proposing and using intelligent and autonomous systems. These autonomous missions are new to NASA, and the software development community is just learning to develop such systems. With these new systems, new verification and validation techniques must be used. Current techniques have been developed based on large monolithic systems. These techniques have worked well and reliably, but do not translate to the new autonomous systems that are highly parallel and nondeterministic.
Abstract:
This paper summarises test results that were used to validate a model and scale-up procedure for the high pressure grinding roll (HPGR) which was developed at the JKMRC by Morrell et al. [Morrell, Lim, Tondo, David, 1996. Modelling the high pressure grinding rolls. In: Mining Technology Conference, pp. 169-176.]. Verification of the model is based on results from four data sets that describe the performance of three industrial-scale units fitted with both studded and smooth roll surfaces. The industrial units are currently in operation within the diamond mining industry and are represented by De Beers, BHP Billiton and Rio Tinto. Ore samples from the De Beers and BHP Billiton operations were sent to the JKMRC for ore characterisation and HPGR laboratory-scale tests. Rio Tinto contributed an historical data set of tests completed during a previous research project. The results show that the modelling of the HPGR process has matured to a point where the model may be used to evaluate new comminution circuits and to optimise existing ones. The model's prediction of product size distribution is good and has been found to be strongly dependent on the characteristics of the material being tested. The prediction of throughput and of the corresponding power draw (which is based on throughput) is sensitive to inconsistent gap/diameter ratios observed between laboratory-scale tests and full-scale operations. (C) 2004 Elsevier Ltd. All rights reserved.
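For readers unfamiliar with HPGR scale-up quantities, the sketch below shows the generic continuity-based throughput relation and a specific-energy-based power estimate; it is not the JKMRC model validated in the paper, and every parameter value is an illustrative placeholder rather than measured data.

```python
# Illustration only: these are the generic continuity-based throughput and
# specific-energy-based power relations often used in HPGR scale-up, NOT the
# JKMRC model validated in the paper. All parameter values are placeholders.

def hpgr_throughput_tph(flake_density_t_m3, working_gap_m, roll_length_m, roll_speed_m_s):
    """Material extruded through the operating gap per hour (t/h)."""
    return 3600.0 * flake_density_t_m3 * working_gap_m * roll_length_m * roll_speed_m_s

def hpgr_power_kw(specific_energy_kwh_t, throughput_tph):
    """Power draw (kW) implied by a measured specific energy and the throughput."""
    return specific_energy_kwh_t * throughput_tph

q = hpgr_throughput_tph(flake_density_t_m3=2.4, working_gap_m=0.025,
                        roll_length_m=1.0, roll_speed_m_s=1.5)
p = hpgr_power_kw(specific_energy_kwh_t=1.8, throughput_tph=q)
print(f"throughput ~ {q:.0f} t/h, power ~ {p:.0f} kW")
```

Because the working gap enters the throughput relation linearly, any inconsistency in the gap/diameter ratio between laboratory-scale tests and full-scale units feeds directly into the throughput prediction and, through the specific energy, into the predicted power draw, which is one way to read the sensitivity noted in the abstract.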
Abstract:
Workflow systems have traditionally focused on the so-called production processes, which are characterized by pre-definition, high volume, and repetitiveness. Recently, the deployment of workflow systems in non-traditional domains such as collaborative applications, e-learning and cross-organizational process integration has put forth new requirements for flexible and dynamic specification. However, this flexibility cannot be offered at the expense of control, a critical requirement of business processes. In this paper, we will present a foundation set of constraints for flexible workflow specification. These constraints are intended to provide an appropriate balance between flexibility and control. The constraint specification framework is based on the concept of pockets of flexibility, which allows ad hoc changes and/or building of workflows for highly flexible processes. Basically, our approach is to provide the ability to execute on the basis of a partially specified model, where the full specification of the model is made at runtime and may be unique to each instance. The verification of dynamically built models is essential. Whereas ensuring that the model conforms to specified constraints does not pose great difficulty, ensuring that the constraint set itself does not carry conflicts and redundancy is an interesting and challenging problem. In this paper, we will provide a discussion on both the static and dynamic verification aspects. We will also briefly present Chameleon, a prototype workflow engine that implements these concepts. (c) 2004 Elsevier Ltd. All rights reserved.
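As a rough illustration of the kind of checking involved (this is not the Chameleon engine, and the constraint types and activity names below are invented), the sketch checks a fully instantiated pocket of flexibility against simple ordering and inclusion constraints, and performs one minimal static check on the constraint set itself: detecting directly contradictory ordering pairs.

```python
# Illustration only (not the Chameleon engine): checking a fully instantiated
# pocket of flexibility against invented ordering and inclusion constraints,
# plus one minimal static check on the constraint set itself.

from itertools import combinations

order_constraints = [("collect_data", "review"), ("review", "approve")]
mandatory_activities = {"review"}

def conforms(instance, order, required):
    """Does a fully specified instance satisfy the ordering and inclusion constraints?"""
    pos = {a: i for i, a in enumerate(instance)}
    ordered = all(a in pos and b in pos and pos[a] < pos[b] for a, b in order)
    return ordered and required <= set(instance)

def directly_conflicting(order):
    """Static check: constraint pairs demanding both A-before-B and B-before-A."""
    return [(a, b) for (a, b), (c, d) in combinations(order, 2) if a == d and b == c]

print(conforms(["collect_data", "review", "approve"], order_constraints, mandatory_activities))  # True
print(directly_conflicting(order_constraints + [("approve", "review")]))  # [('review', 'approve')]
```

A full static verification would also have to detect longer ordering cycles and redundant constraints, which is the harder problem the abstract points to.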
Abstract:
A complete workflow specification requires careful integration of many different process characteristics. Decisions must be made as to the definitions of individual activities, their scope, the order of execution that maintains the overall business process logic, the rules governing the discipline of work list scheduling to performers, the identification of time constraints, and more. The goal of this paper is to address an important issue in workflow modelling and specification, namely data flow: its modelling, specification and validation. Researchers have neglected this dimension of process analysis for some time, mainly focussing on structural considerations with limited verification checks. In this paper, we identify and justify the importance of data modelling in overall workflow specification and verification. We illustrate and define several potential data flow problems that, if not detected prior to workflow deployment, may prevent the process from executing correctly, cause it to execute on inconsistent data, or even lead to process suspension. A discussion of the essential requirements that a workflow data model must meet in order to support data validation is also given.
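As a hedged illustration of the kind of data flow validation the paper argues for (the anomaly names and the toy process below are ours, not the paper's formal definitions), the sketch flags an activity that reads a data item no earlier activity produces, and items that are produced but never consumed, in a simple sequential process.

```python
# Hedged illustration (not the paper's formal model): detecting two common
# data flow anomalies in a sequential workflow -- an activity reading a data
# item that no earlier activity produced ("missing data"), and items that are
# produced but never read ("redundant data"). Activity names are invented.

workflow = [  # (activity, reads, writes) in execution order
    ("enter_claim",  set(),        {"claim"}),
    ("assess_claim", {"claim"},    {"assessment"}),
    ("pay_claim",    {"approval"}, {"payment"}),   # 'approval' is never written
]

def data_flow_anomalies(process):
    available, read_anywhere, missing = set(), set(), []
    for activity, reads, writes in process:
        missing += [(activity, d) for d in reads - available]
        read_anywhere |= reads
        available |= writes
    redundant = available - read_anywhere
    return missing, redundant

missing, redundant = data_flow_anomalies(workflow)
print("missing:", missing)      # [('pay_claim', 'approval')]
print("redundant:", redundant)  # {'assessment', 'payment'}
```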
Abstract:
The results of empirical studies are limited to particular contexts, difficult to generalise, and the studies themselves are expensive to perform. Despite these problems, empirical studies in software engineering can be made effective, and they are important to both researchers and practitioners. The key to their effectiveness lies in maximising the information that can be gained by examining existing studies, conducting power analyses for an accurate minimum sample size, and benefiting from previous studies through replication. This approach was applied in a controlled experiment examining the combination of automated static analysis tools and code inspection in the context of verification and validation (V&V) of concurrent Java components. The combination of these V&V technologies was shown to be cost-effective despite the size of the study, thus contributing to research in V&V technology evaluation.
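As an illustration of the power-analysis step mentioned above (the effect size, significance level and power below are conventional textbook values, not figures from the experiment), a minimum sample size per group for a two-group comparison can be approximated with the standard normal-approximation formula n ≈ 2((z_(1-α/2) + z_(1-β))/d)².

```python
# Illustration only: a-priori sample-size estimate per group for a two-group
# comparison using the standard normal-approximation formula
#   n ~ 2 * ((z_(1-alpha/2) + z_(1-beta)) / d)^2,   d = Cohen's d.
# The effect size, alpha and power below are conventional values, not figures
# taken from the experiment described above.

from math import ceil
from statistics import NormalDist

def min_n_per_group(effect_size, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

print(min_n_per_group(effect_size=0.8))  # ~25 participants per group for a large effect
```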
Abstract:
While a considerable number of candidate Myb target genes have been reported to date, most of these are likely to play little or no role in transformation by myb oncogenes. Here we have used a conditionally myb-transformed myeloid cell line (ERMYB) to further examine Myb regulation of one candidate target gene, c-myc, that has the potential to affect cell proliferation. It was found that the major influence on c-myc expression was the presence of cytokine (GM-CSF) rather than Myb activity. We also describe the application of PCR-based subtractive hybridization and low-density cDNA array screening, in conjunction with the ERMYB line, to the identification of additional Myb target genes. Preliminary identification of a number of candidates is reported; these include myeloperoxidase, which is known to have essential Myb-binding sites in its regulatory region. (C) 2001 Academic Press.
Abstract:
Background: Microarray transcript profiling has the potential to illuminate the molecular processes that are involved in the responses of cattle to disease challenges. This knowledge may allow the development of strategies that exploit these genes to enhance resistance to disease in an individual or animal population. Results: The Bovine Innate Immune Microarray developed in this study consists of 1480 characterised genes identified by literature searches, 31 positive and negative control elements and 5376 cDNAs derived from subtracted and normalised libraries. The cDNA libraries were produced from 'challenged' bovine epithelial and leukocyte cells. The microarray was found to have a limit of detection of 1 pg/µg of total RNA and a mean slide-to-slide correlation coefficient of 0.88. The profiles of differentially expressed genes from Concanavalin A (ConA) stimulated bovine peripheral blood lymphocytes were determined. Three distinct profiles highlighted 19 genes that were rapidly up-regulated within 30 minutes and returned to basal levels by 24 h; 76 genes that were up-regulated between 2-8 hours and sustained high levels of expression until 24 h; and 10 genes that were down-regulated. Quantitative real-time RT-PCR on selected genes was used to confirm the results from the microarray analysis. The results indicate that there is a dynamic process involving gene activation and regulatory mechanisms re-establishing homeostasis in the ConA-activated lymphocytes. The Bovine Innate Immune Microarray was also used to determine the cross-species hybridisation capabilities of an ovine PBL sample. Conclusion: The Bovine Innate Immune Microarray developed here contains a set of well-characterised genes and anonymous cDNAs from a number of different bovine cell types. The microarray can be used to determine the gene expression profiles underlying innate immune responses in cattle and sheep.
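Purely as an illustration of the slide-to-slide correlation metric reported above (the intensity values are invented placeholders, not data from the study), such a coefficient is simply a Pearson correlation between replicate slides:

```python
# Illustration only: how a slide-to-slide correlation coefficient like the
# 0.88 reported above can be computed from replicate slide intensities.
# The intensity values below are invented placeholders.

from statistics import correlation  # Python 3.10+

slide_a = [5.1, 7.8, 6.2, 9.4, 4.3, 8.0]   # log2 intensities, replicate slide A
slide_b = [5.4, 7.5, 6.0, 9.1, 4.8, 8.3]   # log2 intensities, replicate slide B
print(f"slide-to-slide r = {correlation(slide_a, slide_b):.2f}")
```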
Abstract:
This chapter examines the relationship between globalisation and technological progress. It computes an annual, country-specific measure of the technological gap, the technology gap ratio (TGR), using a recently proposed method known as metafrontiers. The TGR is measured as the distance from a group frontier to the global (or meta) frontier. The TGRs provide a measure with which to compare technological capability across countries. The ranking obtained from the metafrontier method is first compared to other methods based on direct measures such as patents, science articles and schooling. The TGRs are then related to levels of trade openness and to inbound and outbound foreign direct investment, within regions and over time, in an effort to identify the relationship between the technological gap and outward orientation.
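The chapter's own notation is not reproduced above; in the metafrontier literature the technology gap ratio is conventionally defined as the ratio of technical efficiency measured against the metafrontier to that measured against the group frontier, for example:

```latex
% Conventional metafrontier definition of the technology gap ratio; the
% notation follows the standard literature rather than the chapter itself.
\[
  \mathrm{TGR}_{it} \;=\; \frac{TE^{\text{meta}}_{it}}{TE^{k}_{it}} \;\le\; 1,
\]
% where $TE^{\text{meta}}_{it}$ is the technical efficiency of country $i$ in year $t$
% measured against the metafrontier, and $TE^{k}_{it}$ is its efficiency measured
% against the frontier of its own group $k$. Values closer to 1 indicate a smaller
% gap between the group's technology and the best available global technology.
```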