116 results for validation process
in University of Queensland eSpace - Australia
Abstract:
Medical microbiology and virology laboratories use nucleic acid tests (NAT) to detect the genomic material of infectious organisms in clinical samples. Laboratories choose to perform assembled (or in-house) NAT if commercial assays are not available or if assembled NAT are more economical or accurate. One reason commercial assays are more expensive is that extensive validation is necessary before a kit is marketed, as manufacturers must accept liability for the performance of their assays provided their instructions are followed. It is, on the other hand, a particular laboratory's responsibility to validate an assembled NAT prior to using it for testing and reporting results on human samples. There are few published guidelines for the validation of assembled NAT. One procedure that laboratories can use to establish a validation process for an assay is detailed in this document. Before validating a method, laboratories must optimise it and then document the protocol. All instruments must be calibrated and maintained throughout the testing process. The validation process involves a series of steps, including: (i) testing dilution series of positive samples to determine the limit of detection of the assay and, for quantitative NAT, its linearity over the concentrations to be measured; (ii) establishing the day-to-day variation of the assay's performance; (iii) evaluating the sensitivity and specificity of the assay as far as practicable, along with the extent of cross-reactivity with other genomic material; and (iv) assuring the quality of assembled assays using quality control procedures that monitor the performance of reagent batches before new lots of reagent are introduced for testing.
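Two of the steps above reduce to calculations a laboratory can easily script. The following Python sketch works through step (i) and step (iii) on invented replicate counts; every number, threshold and sample panel is a hypothetical placeholder rather than part of the documented procedure.

```python
# Sketch of two validation calculations for an assembled NAT:
# (i) limit of detection from a dilution series, and (iii) sensitivity and
# specificity against characterised samples. All numbers are illustrative.

def hit_rates(dilution_replicates):
    """Fraction of positive replicates at each concentration (copies/mL)."""
    return {conc: sum(hits) / len(hits) for conc, hits in dilution_replicates.items()}

def limit_of_detection(rates, threshold=0.95):
    """Lowest tested concentration detected in >= threshold of replicates."""
    detected = [c for c, r in sorted(rates.items()) if r >= threshold]
    return detected[0] if detected else None

def sensitivity_specificity(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical dilution series: concentration -> replicate outcomes (1 = detected)
series = {1000: [1] * 20, 100: [1] * 20, 50: [1] * 19 + [0], 10: [1] * 12 + [0] * 8}
print("LoD (95% hit rate):", limit_of_detection(hit_rates(series)), "copies/mL")
# Hypothetical characterised panel: 48/50 true positives, 50/50 true negatives
print("sensitivity, specificity:", sensitivity_specificity(tp=48, fn=2, tn=50, fp=0))
```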
Abstract:
In the context of cancer diagnosis and treatment, we consider the problem of constructing an accurate prediction rule on the basis of a relatively small number of tumor tissue samples of known type containing expression data on very many (possibly thousands of) genes. Recently, results have been presented in the literature suggesting that it is possible to construct a prediction rule from only a few genes such that it has a negligible prediction error rate. However, in these results the test error or the leave-one-out cross-validated error is calculated without allowance for the selection bias. There is no allowance either because the rule is tested on tissue samples that were used in the first instance to select the genes being used in the rule, or because the cross-validation of the rule is not external to the selection process; that is, gene selection is not performed in training the rule at each stage of the cross-validation process. We describe how in practice the selection bias can be assessed and corrected for by performing either a cross-validation or a bootstrap external to the selection process. We recommend using 10-fold rather than leave-one-out cross-validation, and, concerning the bootstrap, we suggest using the so-called .632+ bootstrap error estimate designed to handle overfitted prediction rules. Using two published data sets, we demonstrate that when correction is made for the selection bias, the cross-validated error is no longer zero for a subset of only a few genes.
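The difference between internal and external selection is easy to demonstrate in code. Below is a minimal sketch using scikit-learn (an assumed toolchain, not the authors'): putting the gene selector inside a cross-validation pipeline re-runs selection on each training fold, which is exactly what makes the estimate external to the selection process.

```python
# Demonstrates the selection bias on pure-noise data: internal selection
# reports optimistic accuracy, external (in-pipeline) selection does not.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2000))      # 60 tissue samples, 2000 noise "genes"
y = rng.integers(0, 2, size=60)      # random class labels: nothing to learn

# Biased: genes selected on all samples first, then cross-validated
X_sel = SelectKBest(f_classif, k=10).fit_transform(X, y)
biased = cross_val_score(LinearSVC(), X_sel, y, cv=10).mean()

# External: selection is repeated inside each of the 10 training folds
pipe = Pipeline([("select", SelectKBest(f_classif, k=10)), ("clf", LinearSVC())])
honest = cross_val_score(pipe, X, y, cv=10).mean()

print(f"internal selection: {biased:.2f}")   # misleadingly far above 0.5
print(f"external selection: {honest:.2f}")   # near chance, as it should be
```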
Abstract:
A concept has been developed whereby characteristic load cycles of longwall shields can describe most of the interaction between a longwall support and the roof. A characteristic load cycle is the change in support pressure with time from setting the support against the roof to the next release and movement of the support. The concept has been validated through the back-analysis of more than 500 000 individual load cycles in five longwall panels at four mines and seven geotechnical domains. The validation process depended upon the development of new software capable of both handling the large quantity of data emanating from a modern longwall and accurately delineating load cycles; existing software was found incapable of delineating load cycles to sufficient accuracy. Load-cycle analysis can now be used quantitatively to assess the adequacy of support capacity and the appropriateness of set pressure for the conditions under which a longwall is being operated. When linked to a description of geotechnical conditions, this has allowed the development of a database for support selection for greenfield sites. For existing sites, the load-cycle characteristic concept allows for a diagnosis of strata-support problem areas, enabling changes to be made to set pressure and mining strategies to better manage, or avoid, strata control problems. With further development of the software, there is the prospect of developing a system that is able to respond to changes in strata-support interaction in real time.
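The abstract does not describe the delineation algorithm itself, so the sketch below is only an illustration of the idea: split a shield leg-pressure trace into set-to-release cycles, treating a large sharp pressure drop as a release event. The drop threshold and the synthetic trace are assumptions.

```python
# Illustrative load-cycle delineation from a leg-pressure time series.
import numpy as np

def delineate_cycles(pressure, drop_threshold=5.0):
    """Split a pressure trace (MPa) into cycles at sharp drops (releases)."""
    drops = np.where(np.diff(pressure) < -drop_threshold)[0] + 1
    bounds = [0, *drops, len(pressure)]
    return [pressure[a:b] for a, b in zip(bounds[:-1], bounds[1:]) if b - a > 1]

def cycle_stats(cycle):
    """Set pressure, final pressure and mean loading rate for one cycle."""
    return {"set": cycle[0], "final": cycle[-1],
            "rate": (cycle[-1] - cycle[0]) / max(len(cycle) - 1, 1)}

# Synthetic trace: three cycles, each set at ~30 MPa and loading steadily
trace = np.concatenate([30 + 0.5 * np.arange(20),
                        31 + 0.4 * np.arange(25),
                        30 + 0.6 * np.arange(15)])
for cycle in delineate_cycles(trace):
    print(cycle_stats(cycle))
```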
Abstract:
The biological reactions during the settling and decant periods of Sequencing Batch Reactors (SBRs) are generally ignored, as they are not easily measured or described by modelling approaches. However, important processes take place during these periods; in particular, when the influent is fed into the bottom of the reactor during settling and decanting (one of the main features of the UniFed process), the inclusion of these stages is crucial for accurate process predictions. Owing to the vertical stratification of both liquid and solid components, a one-dimensional hydraulic model is combined with a modified ASM2d biological model to allow the prediction of settling velocity, sludge concentration, soluble components and biological processes during the non-mixed periods of the SBR. The model is calibrated on a full-scale UniFed SBR system with tracer breakthrough tests, depth profiles of particulate and soluble compounds, and measurements of the key components during the mixed aerobic period. The model is then validated against results from an independent experimental period with considerably different operating parameters. In both cases, the model accurately predicts the stratification and most of the biological reactions occurring in the sludge blanket and the supernatant during the non-mixed periods. Together with a correct description of the mixed aerobic period, a good prediction of the overall SBR performance can be achieved.
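As a rough illustration of the one-dimensional, layered view of the non-mixed periods, the sketch below settles solids through a stack of layers using a Takács-style double-exponential settling velocity. That velocity function and every parameter value are assumptions for illustration, not the paper's calibrated model.

```python
# Minimal 1-D layered settling sketch (top layer is index 0).
import numpy as np

def settling_velocity(X, v0=6.0, rh=4e-4, rp=2.5e-3, Xmin=20.0):
    """Settling velocity (m/h) versus solids concentration X (g/m3),
    double-exponential (Takacs-style) form; parameters are illustrative."""
    Xs = np.maximum(X - Xmin, 0.0)
    return np.clip(v0 * (np.exp(-rh * Xs) - np.exp(-rp * Xs)), 0.0, v0)

def settle_step(X, dz=0.1, dt=0.001):
    """One explicit time step of downward gravity flux between layers."""
    flux = settling_velocity(X) * X    # g/(m2 h) leaving each layer downward
    flux[-1] = 0.0                     # solids stay in the bottom layer
    inflow = np.roll(flux, 1)          # layer i receives flux from layer i-1
    inflow[0] = 0.0                    # nothing enters the top layer
    return X + dt * (inflow - flux) / dz

X = np.full(10, 3000.0)                # initially well mixed at 3000 g/m3
for _ in range(2000):                  # ~2 h of settling
    X = settle_step(X)
print(np.round(X))                     # supernatant clears, blanket forms below
```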
Abstract:
Workflow systems have traditionally focused on so-called production processes, which are characterized by pre-definition, high volume, and repetitiveness. Recently, the deployment of workflow systems in non-traditional domains such as collaborative applications, e-learning and cross-organizational process integration has put forth new requirements for flexible and dynamic specification. However, this flexibility cannot be offered at the expense of control, a critical requirement of business processes. In this paper, we present a foundation set of constraints for flexible workflow specification. These constraints are intended to provide an appropriate balance between flexibility and control. The constraint specification framework is based on the concept of pockets of flexibility, which allows ad hoc changes and/or building of workflows for highly flexible processes. Basically, our approach is to provide the ability to execute on the basis of a partially specified model, where the full specification of the model is made at runtime and may be unique to each instance. The verification of dynamically built models is essential. Whereas ensuring that the model conforms to specified constraints does not pose great difficulty, ensuring that the constraint set itself does not carry conflicts and redundancy is an interesting and challenging problem. In this paper, we provide a discussion on both the static and dynamic verification aspects. We also briefly present Chameleon, a prototype workflow engine that implements these concepts. (c) 2004 Elsevier Ltd. All rights reserved.
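To make the static verification problem concrete: if ordering constraints are read as directed edges, a cycle in the constraint set is a conflict and an edge implied by a longer path is redundant. The sketch below uses that simple graph reading, an illustrative stand-in for the paper's formalism rather than its actual constraint language.

```python
# Detects conflicting (cyclic) and redundant (transitively implied)
# ordering constraints via a transitive closure.
from itertools import product

def closure(edges, nodes):
    """Transitive reachability via Floyd-Warshall-style iteration."""
    reach = {(a, b): (a, b) in edges for a, b in product(nodes, repeat=2)}
    for k, i, j in product(nodes, repeat=3):
        reach[(i, j)] = reach[(i, j)] or (reach[(i, k)] and reach[(k, j)])
    return reach

def verify(edges):
    nodes = {n for e in edges for n in e}
    reach = closure(edges, nodes)
    conflicts = [n for n in sorted(nodes) if reach[(n, n)]]       # cycles
    redundant = [(a, b) for (a, b) in edges
                 if any(reach[(a, k)] and reach[(k, b)]
                        for k in nodes if k not in (a, b))]       # implied
    return conflicts, redundant

edges = {("A", "B"), ("B", "C"), ("A", "C")}   # A<B and B<C imply A<C
print(verify(edges))                           # ([], [('A', 'C')])
```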
Abstract:
Dimensionless spray flux Ψa characterises the three most important variables in liquid dispersion: flowrate, drop size and powder flux through the spray zone. In this paper, the Poisson distribution was used to generate analytical solutions for the proportion of nuclei formed from single drops (fsingle) and the fraction of the powder surface covered by drops (fcovered) as a function of Ψa. Monte-Carlo simulations were performed to simulate the spray zone and investigate how Ψa, fsingle and fcovered are related. The Monte-Carlo data were an excellent match with the analytical solutions for fcovered and fsingle as a function of Ψa. At low Ψa, the proportion of the surface covered by drops (fcovered) was equal to Ψa. As Ψa increases, drop overlap becomes more dominant and the powder surface coverage levels off. The proportion of nuclei formed from single drops (fsingle) falls exponentially with increasing Ψa. Within the ranges covered, these results were independent of drop size, number of drops, drop size distribution (mono-sized, bimodal and trimodal distributions), and the uniformity of the spray. Experimental data of nuclei size distributions as a function of spray flux were fitted to the analytical solution for fsingle by defining a cut size for single-drop nuclei. The fitted cut sizes followed the spray drop sizes, suggesting that the method is robust and that the cut size does indicate the transition size between single-drop and agglomerate nuclei. This demonstrates that the nuclei distribution is determined by the dimensionless spray flux, and that the fraction of drop-controlled nuclei can be calculated analytically in advance.
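A small simulation makes the relationship concrete. In the sketch below, drop centres land as a spatial Poisson process, and the measured coverage and single-drop fractions are compared against the Poisson closed forms 1 − exp(−Ψa) and exp(−4Ψa), which reproduce both limits quoted above (fcovered ≈ Ψa at low Ψa; exponential decay of fsingle). The geometry and parameter values are illustrative, and boundary effects are ignored.

```python
# Monte-Carlo spray zone: drops land at random; compare measured fractions
# with the Poisson closed forms for coverage and single-drop nuclei.
import numpy as np

rng = np.random.default_rng(1)

def spray_zone(psi_a, L=50.0, r=1.0):
    """Scatter drop centres on an L x L surface; the expected footprint-area
    fraction n * pi * r^2 / L^2 equals psi_a."""
    n = rng.poisson(psi_a * L**2 / (np.pi * r**2))
    return rng.uniform(0, L, size=(n, 2)), r, L

def f_single(centres, r):
    """Fraction of drops with no neighbouring centre within 2r."""
    d = np.linalg.norm(centres[:, None] - centres[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return float(np.mean(d.min(axis=1) > 2 * r))

def f_covered(centres, r, L, npts=2000):
    """Fraction of random surface points inside at least one footprint."""
    pts = rng.uniform(0, L, size=(npts, 2))
    d = np.linalg.norm(pts[:, None] - centres[None, :], axis=-1)
    return float(np.mean(d.min(axis=1) <= r))

for psi in (0.1, 0.5, 1.0):
    c, r, L = spray_zone(psi)
    print(f"psi_a={psi}: covered {f_covered(c, r, L):.2f} "
          f"(theory {1 - np.exp(-psi):.2f}), single {f_single(c, r):.2f} "
          f"(theory {np.exp(-4 * psi):.2f})")
```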
Abstract:
Background: Microarray transcript profiling has the potential to illuminate the molecular processes that are involved in the responses of cattle to disease challenges. This knowledge may allow the development of strategies that exploit these genes to enhance resistance to disease in an individual or animal population. Results: The Bovine Innate Immune Microarray developed in this study consists of 1480 characterised genes identified by literature searches, 31 positive and negative control elements, and 5376 cDNAs derived from subtracted and normalised libraries. The cDNA libraries were produced from 'challenged' bovine epithelial and leukocyte cells. The microarray was found to have a limit of detection of 1 pg/μg of total RNA and a mean slide-to-slide correlation coefficient of 0.88. The profiles of differentially expressed genes from Concanavalin A (ConA)-stimulated bovine peripheral blood lymphocytes were determined. Three distinct profiles highlighted 19 genes that were rapidly up-regulated within 30 minutes and returned to basal levels by 24 h; 76 genes that were up-regulated between 2 and 8 h and sustained high levels of expression until 24 h; and 10 genes that were down-regulated. Quantitative real-time RT-PCR on selected genes was used to confirm the results from the microarray analysis. The results indicate that there is a dynamic process involving gene activation and regulatory mechanisms re-establishing homeostasis in the ConA-activated lymphocytes. The Bovine Innate Immune Microarray was also used to determine the cross-species hybridisation capabilities of an ovine PBL sample. Conclusion: The Bovine Innate Immune Microarray, which contains a set of well-characterised genes and anonymous cDNAs from a number of different bovine cell types, has been developed. The microarray can be used to determine the gene expression profiles underlying innate immune responses in cattle and sheep.
Abstract:
Accurate habitat mapping is critical to landscape ecological studies such as those required for developing and testing Montreal Process indicator 1.1e, fragmentation of forest types. This task poses a major challenge to remote sensing, especially in mixed-species, variable-age forests such as the dry eucalypt forests of subtropical eastern Australia. In this paper, we apply an innovative approach that uses a small section of one-metre resolution airborne data to calibrate a moderate spatial resolution model (30 m resolution; scale 1:50 000) based on Landsat Thematic Mapper data to estimate canopy structural properties in St Marys State Forest, near Maryborough, south-eastern Queensland. The approach applies an image-processing model that assumes each image pixel is significantly larger than individual tree crowns and gaps to estimate crown-cover percentage, stem density and mean crown diameter. These parameters were classified into three discrete habitat classes to match the ecology of four exudivorous arboreal species (yellow-bellied glider Petaurus australis, sugar glider P. breviceps, squirrel glider P. norfolcensis, and feathertail glider Acrobates pygmaeus) and one folivorous arboreal marsupial, the greater glider Petauroides volans. These species were targeted due to their known ecological preference for old trees with hollows, and differences in their home range requirements. The overall mapping accuracy, visually assessed against transects (n = 93) interpreted from a digital orthophoto and validated in the field, was 79% (KHAT statistic = 0.72). The KHAT statistic indicates the extent to which the percentage-correct values of the error matrix are due to ‘true’ agreement versus ‘chance’ agreement. This means that we are able to reliably report on the effect of habitat loss on target species, especially those with a large home range size (e.g. yellow-bellied glider). However, the classified habitat map failed to accurately capture the spatial patterning (e.g. patch size and shape) of stands with a trace or sub-dominance of senescent trees. This outcome makes the reporting of the effects of habitat fragmentation more problematic, especially for species with a small home range size (e.g. feathertail glider). With further model refinement and validation, however, this moderate-resolution approach offers an important, cost-effective advancement in mapping the age of dry eucalypt forests in the region.
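The KHAT statistic referred to above is Cohen's kappa computed from the error (confusion) matrix. A minimal sketch with a hypothetical 3-class matrix for the 93 transects (the actual matrix is not reproduced in the abstract):

```python
# KHAT (Cohen's kappa): agreement beyond what chance would produce.
import numpy as np

def khat(error_matrix):
    m = np.asarray(error_matrix, dtype=float)
    n = m.sum()
    p_observed = np.trace(m) / n                             # percent correct
    p_chance = (m.sum(axis=0) * m.sum(axis=1)).sum() / n**2  # chance agreement
    return (p_observed - p_chance) / (1 - p_chance)

# Rows: mapped habitat class; columns: reference class from the transects.
# Entirely hypothetical values, chosen only so that n = 93.
matrix = [[28, 4, 3],
          [5, 24, 4],
          [2, 2, 21]]
print(f"overall accuracy {np.trace(np.array(matrix)) / np.sum(matrix):.0%}, "
      f"KHAT {khat(matrix):.2f}")
```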
Abstract:
A steady-state mathematical model for co-current spray drying was developed for sugar-rich foods with the application of the glass transition temperature concept. A maltodextrin-sucrose solution was used as a sugar-rich food model. The model included mass, heat and momentum balances for a single droplet drying, as well as the temperature and humidity profile of the drying medium. A log-normal volume distribution of the droplets was generated at the exit of the rotary atomizer. This generation created a certain number of bins to form a system of non-linear first-order differential equations as a function of the axial distance of the drying chamber. The model was used to calculate the changes of droplet diameter, density, temperature, moisture content and velocity in association with the change of air properties along the axial distance. The difference between the outlet air temperature and the glass transition temperature of the final products (ΔT) was considered as an indicator of stickiness of the particles in the spray-drying process. The calculated and experimental ΔT values were close, indicating successful validation of the model. (c) 2004 Elsevier Ltd. All rights reserved.
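The stickiness indicator ΔT can be sketched numerically. The version below estimates the product's glass transition temperature with the Gordon-Taylor mixing rule, a common choice for maltodextrin-sucrose systems, and compares it with the outlet air temperature; the paper's exact correlation is not given in the abstract, and all parameter values here are assumptions.

```python
# Delta-T = T_outlet - Tg(product) as a stickiness indicator.
# Temperatures are kept in deg C for readability; strictly, Gordon-Taylor
# is written in absolute temperature, so the k values below are nominal.

def gordon_taylor(w1, tg1, tg2, k):
    """Tg of a binary mixture; w1 is the mass fraction of component 1."""
    w2 = 1.0 - w1
    return (w1 * tg1 + k * w2 * tg2) / (w1 + k * w2)

# Illustrative anhydrous values: sucrose Tg ~ 62 C, maltodextrin Tg ~ 160 C
tg_solids = gordon_taylor(w1=0.5, tg1=62.0, tg2=160.0, k=1.0)
# Residual water (Tg ~ -135 C) strongly plasticises the product
tg_product = gordon_taylor(w1=0.97, tg1=tg_solids, tg2=-135.0, k=6.0)

t_outlet = 90.0                        # outlet air temperature, deg C
delta_t = t_outlet - tg_product
print(f"Tg(product) = {tg_product:.1f} C, Delta-T = {delta_t:.1f} C")
# A sticky-point margin of roughly Tg + 20 C is a common rule of thumb
print("sticky regime" if delta_t > 20.0 else "non-sticky regime")
```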
Abstract:
The critical process parameter for mineral separation is the degree of mineral liberation achieved by comminution. The degree of liberation provides an upper limit of efficiency for any physical separation process. The standard approach to measuring mineral liberation uses mineralogical analysis based on two-dimensional sections of particles, which may be acquired using a scanning electron microscope with back-scattered electron analysis or from an image acquired using an optical microscope. Over the last 100 years, mathematical techniques have been developed to use this two-dimensional information to infer three-dimensional information about the particles. For mineral processing, a particle that contains more than one mineral (a composite particle) may appear to be liberated (contain only one mineral) when analysed using only its revealed particle section. The mathematical techniques used to interpret three-dimensional information belong to a branch of mathematics called stereology. However, methods to obtain the full mineral liberation distribution of particles from particle sections are relatively new. To verify these adjustment methods, we require an experimental method which can accurately measure both sectional and three-dimensional properties. Micro cone beam tomography provides such a method for suitable particles and hence provides a way to validate methods used to convert two-dimensional measurements to three-dimensional estimates. For this study, ore particles from a well-characterised sample were subjected to conventional mineralogical analysis (using particle sections) to estimate three-dimensional properties of the particles. A subset of these particles was analysed using a micro cone beam tomograph. This paper presents a comparison of the three-dimensional properties predicted from measured two-dimensional sections with the measured three-dimensional properties.
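The sectioning bias the paper addresses is easy to reproduce in a small Monte-Carlo experiment. The sketch below idealises every particle as a unit sphere split by a planar phase boundary, then takes random parallel sections; the geometry is an illustration of the stereological effect, not the paper's ore texture.

```python
# How often does a genuinely composite particle present a one-phase section?
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
# Phase boundary at plane x = b inside a unit sphere, so every particle
# contains two minerals (b strictly inside the sphere).
b = rng.uniform(-0.9, 0.9, n)
# Random section plane z = h; the section is a disc of radius sqrt(1 - h^2).
h = rng.uniform(-1.0, 1.0, n)
section_radius = np.sqrt(1.0 - h**2)
# The disc shows both phases only if the boundary plane cuts it (|b| < radius);
# otherwise the composite particle appears liberated in that section.
appears_liberated = np.abs(b) >= section_radius
print(f"{appears_liberated.mean():.1%} of sections through composite "
      f"particles show a single phase")   # roughly one section in six here
```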
Abstract:
Workflow technology has delivered effectively for a large class of business processes, providing the requisite control and monitoring functions. At the same time, this technology has been the target of much criticism due to its limited ability to cope with dynamically changing business conditions, which require business processes to be adapted frequently, and/or its limited ability to model business processes which cannot be entirely predefined. Requirements indicate the need for generic solutions where a balance between process control and flexibility may be achieved. In this paper we present a framework that allows the workflow to execute on the basis of a partially specified model, where the full specification of the model is made at runtime and may be unique to each instance. This framework is based on the notion of process constraints. Whereas process constraints may be specified for any aspect of the workflow, such as structural or temporal aspects, our focus in this paper is on a constraint which allows dynamic selection of activities for inclusion in a given instance. We call these cardinality constraints, and this paper discusses their specification and validation requirements.
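A cardinality constraint of this kind can be validated in a few lines. In the sketch below, the concrete constraint forms (minimum/maximum selections from a pool of activities) are illustrative assumptions rather than the paper's notation.

```python
# Validate a dynamically selected set of activities against cardinality
# constraints; all constraint and activity names are hypothetical.
def validate(selection, constraints):
    errors = []
    for c in constraints:
        chosen = len(set(selection) & set(c["pool"]))
        if not c["min"] <= chosen <= c["max"]:
            errors.append(f"{c['name']}: picked {chosen} from {c['pool']}, "
                          f"allowed {c['min']}..{c['max']}")
    return errors

constraints = [
    {"name": "review step", "pool": ["peer_review", "senior_review"],
     "min": 1, "max": 2},
    {"name": "notification", "pool": ["email", "sms", "fax"],
     "min": 1, "max": 1},
]
print(validate(["peer_review", "email", "sms"], constraints) or "valid")
# -> flags the notification constraint: two channels picked, one allowed
```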
Abstract:
A complete workflow specification requires careful integration of many different process characteristics. Decisions must be made as to the definitions of individual activities, their scope, the order of execution that maintains the overall business process logic, the rules governing the discipline of work list scheduling to performers, the identification of time constraints, and more. The goal of this paper is to address an important issue in workflow modelling and specification: data flow, its modelling, specification and validation. Researchers have neglected this dimension of process analysis for some time, mainly focussing on structural considerations with limited verification checks. In this paper, we identify and justify the importance of data modelling in overall workflow specification and verification. We illustrate and define several potential data flow problems that, if not detected prior to workflow deployment, may prevent the process from executing correctly, cause it to execute on inconsistent data, or even lead to process suspension. A discussion on the essential requirements of the workflow data model needed to support data validation is also given.
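One of the simplest such checks can be sketched directly: given activities in control-flow order with declared reads and writes, flag data read before anything has produced it, and data produced but never read. The representation below is an illustrative assumption, not the paper's data model.

```python
# Detect missing data (read before written) and redundant data (never read).
def check_data_flow(activities, initial=frozenset()):
    """activities: list of (name, reads, writes) in control-flow order."""
    available, problems = set(initial), []
    for name, reads, writes in activities:
        for d in reads:
            if d not in available:
                problems.append(f"{name} reads '{d}' before it is written")
        available |= set(writes)
    all_reads = {d for _, reads, _ in activities for d in reads}
    return problems, sorted(available - all_reads)

workflow = [                                         # hypothetical process
    ("receive_claim", [], ["claim"]),
    ("assess", ["claim", "policy"], ["decision"]),   # 'policy' never produced
    ("notify", ["decision"], ["letter"]),            # 'letter' never consumed
]
print(check_data_flow(workflow, initial={"claim_id"}))
```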
Abstract:
A web wrapper extracts data from HTML documents. The accuracy and quality of the information extracted by a web wrapper rely on the structure of the HTML document: if the document changes, the wrapper may or may not continue to function correctly. This paper presents an Adjacency-Weight method that can be used in the web wrapper extraction process or in a wrapper self-maintenance mechanism to validate web wrappers. The algorithm and data structures are illustrated by some intuitive examples.
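The abstract names the Adjacency-Weight method without specifying it, so the sketch below should be read only as a rough illustration of structure-based wrapper validation: it fingerprints a document by weighted parent-child tag adjacencies and compares fingerprints before and after a page change. The scoring function is invented, not the paper's algorithm.

```python
# Compare the tag-adjacency structure of two versions of a page; a low
# similarity suggests the wrapper built for the old structure needs review.
from collections import Counter
from html.parser import HTMLParser

class AdjacencyCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack, self.pairs = [], Counter()
    def handle_starttag(self, tag, attrs):
        if self.stack:
            self.pairs[(self.stack[-1], tag)] += 1   # parent-child adjacency
        self.stack.append(tag)
    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()

def fingerprint(html):
    parser = AdjacencyCounter()
    parser.feed(html)
    return parser.pairs

def similarity(a, b):
    """Weighted overlap of two adjacency fingerprints, in [0, 1]."""
    shared = sum(min(a[k], b[k]) for k in a.keys() & b.keys())
    return 2 * shared / max(sum(a.values()) + sum(b.values()), 1)

old = "<table><tr><td>a</td><td>b</td></tr></table>"
new = "<div><table><tr><td>a</td></tr></table></div>"
score = similarity(fingerprint(old), fingerprint(new))
print(f"structural similarity {score:.2f} -> "
      + ("wrapper likely valid" if score > 0.8 else "revalidate wrapper"))
```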
Abstract:
The results presented in this report form part of a larger global study on the major issues in BPM. Only one part of the larger study is reported here, viz. interviews with BPM experts. Interviews of BPM tool vendors, together with focus groups involving user organisations, are continuing in parallel and will set the groundwork for the identification of BPM issues on a global scale via a survey (including a Delphi study). Through this multi-method approach, we identify four distinct sets of outcomes. First, as is the focus of this report, we identify the BPM issues as perceived by BPM experts. Second, the research design allows us to gain insight into the opinions of organisations deploying BPM solutions. Third, an understanding of organisations' misconceptions of BPM technologies, as confronted by BPM tool vendors, is obtained. Last, we seek to gain an understanding of BPM issues on a global scale, together with knowledge of matters of concern. This final outcome is aimed at producing an industry-driven research agenda which will inform practitioners and, in particular, the research community worldwide on issues and challenges that are prevalent or emerging in BPM and related areas.