154 results for Computer software - Quality control
Abstract:
This paper presents a systematic approach to proving temporal properties of arbitrary Z specifications. The approach involves (i) transforming the Z specification to an abstract temporal structure (or state transition system), (ii) applying a model checker to the temporal structure, (iii) determining whether the temporal structure is too abstract based on the model checking result and (iv) refining the temporal structure where necessary. The approach is based on existing work from the model checking literature, adapting it to Z.
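The abstract's four-step loop — abstract, model-check, judge the abstraction, refine — can be illustrated with a toy sketch. This is not the paper's method: the example system, the predicates, and all function names are invented. Abstract states are tuples of predicate valuations, "model checking" is plain reachability of bad abstract states, and refinement adds another predicate when the abstraction is too coarse.

```python
def abstract_state(x, preds):
    # Step (i): a concrete state is abstracted to its predicate valuation.
    return tuple(p(x) for p in preds)

def abstract_reachable(states, step, init, preds):
    # Lift each concrete transition x -> step(x) to the abstract level.
    trans = {}
    for x in states:
        a, b = abstract_state(x, preds), abstract_state(step(x), preds)
        trans.setdefault(a, set()).add(b)
    # Step (ii): BFS over the abstract transition system.
    seen, frontier = set(), {abstract_state(i, preds) for i in init}
    while frontier:
        seen |= frontier
        frontier = {b for a in frontier for b in trans.get(a, ())} - seen
    return seen

def check(states, step, init, bad, preds, candidates):
    while True:
        reach = abstract_reachable(states, step, init, preds)
        bad_abs = {abstract_state(x, preds) for x in bad}
        if not (reach & bad_abs):
            return True, preds        # property holds in the abstraction
        if not candidates:
            return False, preds       # step (iii): cannot refine further
        preds = preds + [candidates.pop(0)]   # step (iv): refine

# Toy system: counter mod 8 stepping by 2; "bad" states are the odd ones,
# which are unreachable from 0 because the step preserves parity.
states = range(8)
step = lambda x: (x + 2) % 8
init, bad = {0}, {x for x in states if x % 2 == 1}
ok, preds = check(states, step, init, bad,
                  preds=[lambda x: x < 4],             # too coarse at first
                  candidates=[lambda x: x % 2 == 0])   # parity refinement
print(ok, len(preds))  # True 2 -- the parity predicate was needed
```

With only the coarse `x < 4` predicate, odd and even states collapse into the same abstract states, so a bad abstract state looks reachable; adding the parity predicate separates them and the check succeeds.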
Abstract:
This paper examines the effects of information request ambiguity and construct incongruence on end users' ability to develop SQL queries with an interactive relational database query language. In this experiment, ambiguity in information requests adversely affected accuracy and efficiency. Incongruities among the information request, the query syntax, and the data representation adversely affected accuracy, efficiency, and confidence. The results for ambiguity suggest that organizations might elicit better query development if end users were sensitized to the nature of ambiguities that could arise in their business contexts. End users could translate natural language queries into pseudo-SQL that could be examined for precision before the queries were developed. The results for incongruence suggest that better query development might ensue if semantic distances could be reduced by giving users data representations and database views that maximize construct congruence for the kinds of queries in typical domains. (C) 2001 Elsevier Science B.V. All rights reserved.
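The kind of request ambiguity the study describes is easy to reproduce. In this toy illustration (the table and data are invented, not from the study), the request "employees in Sales with salary over 50 or in HR" has two defensible SQL readings, and because `AND` binds tighter than `OR` they return different rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (name TEXT, dept TEXT, salary INT)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [("Ann", "Sales", 60), ("Bob", "HR", 40), ("Cal", "HR", 55)])

# Reading 1: the salary filter applies only to Sales (AND binds before OR).
reading1 = conn.execute(
    "SELECT name FROM emp WHERE dept='Sales' AND salary>50 OR dept='HR'"
).fetchall()

# Reading 2: the salary filter applies to both departments.
reading2 = conn.execute(
    "SELECT name FROM emp WHERE (dept='Sales' OR dept='HR') AND salary>50"
).fetchall()

print([r[0] for r in reading1])  # ['Ann', 'Bob', 'Cal']
print([r[0] for r in reading2])  # ['Ann', 'Cal']
```

Writing the request out as pseudo-SQL before committing to a query, as the abstract suggests, is exactly what would expose which of the two readings was intended.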
Abstract:
This paper presents a method of formally specifying, refining and verifying concurrent systems which uses the object-oriented state-based specification language Object-Z together with the process algebra CSP. Object-Z provides a convenient way of modelling complex data structures needed to define the component processes of such systems, and CSP enables the concise specification of process interactions. The basis of the integration is a semantics of Object-Z classes identical to that of CSP processes. This allows classes specified in Object-Z to be used directly within the CSP part of the specification. In addition to specification, we also discuss refinement and verification in this model. The common semantic basis enables a unified method of refinement to be used, based upon CSP refinement. To enable state-based techniques to be used for the Object-Z components of a specification we develop state-based refinement relations which are sound and complete with respect to CSP refinement. In addition, a verification method for static and dynamic properties is presented. The method allows us to verify properties of the CSP system specification in terms of its component Object-Z classes by using the laws of the CSP operators together with the logic for Object-Z.
Abstract:
This paper presents the multi-threading and internet message communication capabilities of Qu-Prolog. Message addresses are symbolic, and the communications package provides high-level support that completely hides details of IP addresses and port numbers, as well as the underlying TCP/IP transport layer. The combination of multi-threading and high-level inter-thread message communication provides simple, powerful support for implementing distributed intelligent internet applications.
Abstract:
The QU-GENE Computing Cluster (QCC) is a hardware and software solution to the automation and speedup of large QU-GENE (QUantitative GENEtics) simulation experiments that are designed to examine the properties of genetic models, particularly those that involve factorial combinations of treatment levels. QCC automates the management of the distribution of components of the simulation experiments among the networked single-processor computers to achieve the speedup.
Abstract:
Wet agglomeration processes have traditionally been considered an empirical art, with great difficulties in predicting and explaining observed behaviour. Industry has faced a range of problems including large recycle ratios, poor product quality control, surging and even the total failure of scale up from laboratory to full scale production. However, in recent years there has been a rapid advancement in our understanding of the fundamental processes that control granulation behaviour and product properties. This review critically evaluates the current understanding of the three key areas of wet granulation processes: wetting and nucleation, consolidation and growth, and breakage and attrition. Particular emphasis is placed on the fact that there now exist theoretical models which predict or explain the majority of experimentally observed behaviour. Provided that the correct material properties and operating parameters are known, it is now possible to make useful predictions about how a material will granulate. The challenge that now faces us is to transfer these theoretical developments into industrial practice. Standard, reliable methods need to be developed to measure the formulation properties that control granulation behaviour, such as contact angle and dynamic yield strength. There also needs to be a better understanding of the flow patterns, mixing behaviour and impact velocities in different types of granulation equipment. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
A data warehouse is a data repository which collects and maintains a large amount of data from multiple distributed, autonomous and possibly heterogeneous data sources. Often the data is stored in the form of materialized views in order to provide fast access to the integrated data. One of the most important decisions in designing a data warehouse is the selection of views for materialization. The objective is to select an appropriate set of views that minimizes the total query response time, subject to the constraint that the total maintenance time for these materialized views is within a given bound. This view selection problem is fundamentally different from the view selection problem under a disk space constraint. In this paper the view selection problem under the maintenance time constraint is investigated. Two efficient heuristic algorithms for the problem are proposed. The key to devising the proposed algorithms is to define good heuristic functions and to reduce the problem to some well-solved optimization problems. As a result, an approximate solution of the known optimization problem will give a feasible solution of the original problem. (C) 2001 Elsevier Science B.V. All rights reserved.
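The shape of the problem — maximize query-time benefit subject to a maintenance-time budget — resembles a knapsack, and a simple benefit-per-cost greedy rule gives a feel for it. This is only an illustrative sketch under invented data, not one of the paper's two algorithms (those rely on problem reductions and heuristic functions the abstract does not detail):

```python
def select_views(views, bound):
    """Greedy sketch: pick views by query-time saving per unit of
    maintenance time, skipping any view that would exceed the bound.
    `views` maps a view name to (query_time_saving, maintenance_time)."""
    chosen, used = [], 0
    ranked = sorted(views.items(),
                    key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
    for name, (saving, cost) in ranked:
        if used + cost <= bound:
            chosen.append(name)
            used += cost
    return chosen, used

# Hypothetical candidate views: (saving, maintenance cost) pairs.
views = {"v1": (100, 30), "v2": (80, 10), "v3": (40, 25), "v4": (20, 5)}
print(select_views(views, bound=45))  # (['v2', 'v4', 'v1'], 45)
```

Note that such a greedy rule is only an approximation: like the fractional-knapsack heuristic applied to the 0/1 case, it can miss the optimal subset, which is why careful heuristic design matters for this problem.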
Abstract:
Qualitative data analysis (QDA) is often a time-consuming and laborious process usually involving the management of large quantities of textual data. Recently developed computer programs offer great advances in the efficiency of the processes of QDA. In this paper we report on an innovative use of a combination of extant computer software technologies to further enhance and simplify QDA. Used in appropriate circumstances, we believe that this innovation greatly enhances the speed with which theoretical and descriptive ideas can be abstracted from rich, complex, and chaotic qualitative data. © 2001 Human Sciences Press, Inc.
Abstract:
The focus of rapid diagnosis of infectious diseases of children in the last decade has shifted from variations of the conventional laboratory techniques of antigen detection, microscopy and culture to molecular diagnosis of infectious agents. Pediatricians will need to be able to interpret the use, limitations and results of molecular diagnostic techniques as these are increasingly integrated into routine clinical microbiology laboratory protocols. PCR is the best known and most successfully implemented diagnostic molecular technology to date. It can detect specific infectious agents and determine their virulence and antimicrobial genotypes with greater speed, sensitivity and specificity than conventional microbiology methods. PCR has inherent technical limitations, although these are reduced in laboratories that follow suitable validation and quality control procedures. Variations of PCR together with advances in nucleic acid amplification technology have broadened its diagnostic capabilities in clinical infectious disease to now rival and even surpass traditional methods in some situations. Automation of all components of PCR is now possible. The completion of the genome sequencing projects for significant microbial pathogens, in combination with PCR and DNA chip technology, will revolutionize the diagnosis and management of infectious diseases.