973 results for SOFTWARE-RELIABILITY MODELS
Abstract:
The QU-GENE Computing Cluster (QCC) is a hardware and software solution for automating and speeding up large QU-GENE (QUantitative GENEtics) simulation experiments designed to examine the properties of genetic models, particularly those that involve factorial combinations of treatment levels. QCC achieves the speedup by automating the distribution of simulation-experiment components among networked single-processor computers.
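As an orientation only, the kind of work distribution QCC automates can be pictured with a minimal Python sketch that farms factorial treatment combinations out to worker processes. The factor names and the run_simulation stub below are hypothetical, and QCC itself distributes QU-GENE engine runs across networked machines rather than local processes.

    from itertools import product
    from multiprocessing import Pool

    # Hypothetical treatment factors for a factorial simulation experiment; QCC
    # distributes runs across networked single-processor machines, whereas this
    # sketch only uses local worker processes.
    FACTORS = {
        "heritability": [0.1, 0.3, 0.5],
        "population_size": [100, 200, 500],
        "selection_strategy": ["mass", "pedigree"],
    }

    def run_simulation(treatment):
        # Stand-in for launching one QU-GENE simulation for this treatment combination.
        return treatment, hash(treatment) % 1000

    if __name__ == "__main__":
        combinations = list(product(*FACTORS.values()))
        with Pool(processes=4) as pool:
            results = pool.map(run_simulation, combinations)
        print(f"completed {len(results)} of {len(combinations)} runs")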
Abstract:
Despite their limitations, linear filter models continue to be used to simulate the receptive field properties of cortical simple cells. For theoreticians interested in large-scale models of visual cortex, a family of self-similar filters represents a convenient way in which to characterise simple cells in one basic model. This paper reviews research on the suitability of such models, and goes on to advance biologically motivated reasons for adopting a particular group of models in preference to all others. In particular, the paper describes why the Gabor model, so often used in network simulations, should be dropped in favour of a Cauchy model, on the grounds of both frequency response and mutual filter orthogonality.
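For readers wanting a concrete handle on the "self-similar family" and "mutual filter orthogonality" arguments, the following Python sketch builds a scaled family of 1-D Gabor filters and computes their pairwise inner products. The parameters are illustrative assumptions, not those analysed in the paper, and the Cauchy family is not reproduced here.

    import numpy as np

    x = np.linspace(-8.0, 8.0, 4001)
    dx = x[1] - x[0]

    def gabor(x, f, sigma, phase=0.0):
        # 1-D Gabor filter: Gaussian envelope multiplied by a cosine carrier.
        return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f * x + phase)

    # A self-similar family: each member is a rescaled copy of the same mother
    # filter, so bandwidth in octaves is constant across the family.
    filters = [gabor(x, f=f0, sigma=1.0 / f0) for f0 in (0.5, 1.0, 2.0, 4.0)]

    # Mutual (non-)orthogonality: pairwise inner products between family members.
    gram = np.array([[np.dot(a, b) * dx for b in filters] for a in filters])
    print(np.round(gram, 3))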
Abstract:
Hereditary nonpolyposis colorectal cancer syndrome (HNPCC) is an autosomal dominant condition accounting for 2–5% of all colorectal carcinomas as well as a small subset of endometrial, upper urinary tract and other gastrointestinal cancers. An assay to detect the underlying defect in HNPCC, inactivation of a DNA mismatch repair enzyme, would be useful in identifying HNPCC probands. Monoclonal antibodies against hMLH1 and hMSH2, two DNA mismatch repair proteins which account for most HNPCC cancers, are commercially available. This study sought to investigate the potential utility of these antibodies in determining the expression status of these proteins in paraffin-embedded, formalin-fixed tissue and to identify key technical protocol components associated with successful staining. A set of 20 colorectal carcinoma cases of known hMLH1 and hMSH2 mutation and expression status underwent immunoperoxidase staining at multiple institutions, each of which used its own technical protocol. Staining for hMSH2 was successful in most laboratories, while staining for hMLH1 proved problematic in multiple labs. However, a significant minority of laboratories demonstrated excellent results, including high discriminatory power with both monoclonal antibodies. These laboratories appropriately identified hMLH1 or hMSH2 inactivation with high sensitivity and specificity. The key protocol point associated with successful staining was an antigen retrieval step involving heat treatment and either EDTA or citrate buffer. This study demonstrates the potential utility of immunohistochemistry in detecting HNPCC probands and identifies key technical components for successful staining.
Abstract:
The present paper addresses two major concerns that were identified when developing neural network based prediction models and which can limit their wider applicability in the industry. The first problem is that neural network models do not appear to be readily available to a corrosion engineer. Therefore, the first part of this paper describes a neural network model of CO2 corrosion which was created using a standard commercial software package and simple modelling strategies. It was found that such a model was able to capture practically all of the trends noticed in the experimental data with acceptable accuracy. This exercise has proven that a corrosion engineer could readily develop a neural network model such as the one described here for any problem at hand, given that sufficient experimental data exist. This applies even in cases where the understanding of the underlying processes is poor. The second problem arises when all the required inputs for a model are not known or can be estimated only with a limited degree of accuracy. It seems advantageous to have models that can take as input a range rather than a single value. One such model, based on the so-called Monte Carlo approach, is presented. A number of comparisons are shown which illustrate how a corrosion engineer might use this approach to rapidly test the sensitivity of a model to the uncertainties associated with the input parameters. (C) 2001 Elsevier Science Ltd. All rights reserved.
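The Monte Carlo idea described in the abstract, feeding ranges rather than single values into a model, can be sketched as follows. The corrosion_rate function and the input ranges are placeholders standing in for the paper's trained neural network, not its actual implementation.

    import numpy as np

    rng = np.random.default_rng(1)

    def corrosion_rate(temperature_c, ph, pco2_bar):
        # Placeholder response surface standing in for a trained model; NOT the
        # neural network fitted in the paper.
        return 0.05 * pco2_bar * np.exp(0.03 * temperature_c) * (7.0 - ph)

    N_SAMPLES = 10_000

    # Inputs known only as ranges: sample each uniformly within its assumed bounds.
    temperature = rng.uniform(40.0, 60.0, N_SAMPLES)   # deg C
    ph = rng.uniform(4.5, 5.5, N_SAMPLES)
    pco2 = rng.uniform(0.5, 2.0, N_SAMPLES)            # bar

    rates = corrosion_rate(temperature, ph, pco2)
    print(f"mean rate: {rates.mean():.3f}")
    print(f"5th-95th percentile: {np.percentile(rates, [5, 95])}")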
Abstract:
Five kinetic models for the adsorption of hydrocarbons on activated carbon are compared and investigated in this study. These models assume different mass transfer mechanisms within the porous carbon particle. They are: (a) dual pore and surface diffusion (MSD), (b) macropore, surface, and micropore diffusion (MSMD), (c) macropore, surface and finite mass exchange (FK), (d) finite mass exchange (LK), and (e) macropore, micropore diffusion (BM) models. These models are discriminated using the single-component kinetic data of ethane and propane as well as the multicomponent kinetic data of their binary mixtures, measured on two commercial activated carbon samples (Ajax and Norit) under various conditions. Adsorption energetic heterogeneity is considered in all models to account for the system heterogeneity. It is found that, in general, the models that assume a diffusion flux of the adsorbed phase along the particle scale give a better description of the kinetic data.
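For orientation, a generic single-component macropore-plus-surface-diffusion balance for a spherical particle, of the kind underlying models such as (a), can be written as below. This is a textbook form, not the specific heterogeneous formulations compared in the paper.

    \varepsilon_p \frac{\partial C}{\partial t} + (1-\varepsilon_p)\frac{\partial C_\mu}{\partial t}
      = \frac{1}{r^{2}} \frac{\partial}{\partial r}\left[ r^{2}\left( \varepsilon_p D_p \frac{\partial C}{\partial r}
      + (1-\varepsilon_p) D_s \frac{\partial C_\mu}{\partial r} \right) \right]

where C is the gas-phase concentration in the macropores, C_mu the adsorbed-phase concentration, epsilon_p the particle porosity, D_p the pore diffusivity and D_s the surface diffusivity.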
Abstract:
Understanding the genetic architecture of quantitative traits can greatly assist the design of strategies for their manipulation in plant-breeding programs. For a number of traits, genetic variation can be the result of segregation of a few major genes and many polygenes (minor genes). The joint segregation analysis (JSA) is a maximum-likelihood approach for fitting segregation models through the simultaneous use of phenotypic information from multiple generations. Our objective in this paper was to use computer simulation to quantify the power of the JSA method for testing the mixed-inheritance model for quantitative traits when it was applied to the six basic generations: both parents (P1 and P2), F1, F2, and both backcross generations (B1 and B2) derived from crossing the F1 to each parent. A total of 1968 genetic model-experiment scenarios were considered in the simulation study to quantify the power of the method. Factors that interacted to influence the power of the JSA method to correctly detect genetic models were: (1) whether there were one or two major genes in combination with polygenes, (2) the heritability of the major genes and polygenes, (3) the level of dispersion of the major genes and polygenes between the two parents, and (4) the number of individuals examined in each generation (population size). The greatest levels of power were observed for the genetic models defined with simple inheritance; e.g., the power was greater than 90% for the one major-gene model, regardless of the population size and major-gene heritability. Lower levels of power were observed for the genetic models with complex inheritance (major genes and polygenes), low heritability, small population sizes and a large dispersion of favourable genes between the two parents; e.g., the power was less than 5% for the two major-gene model with a heritability value of 0.3 and population sizes of 100 individuals. The JSA methodology was then applied to a previously studied sorghum data-set to investigate the genetic control of the putative drought-resistance trait osmotic adjustment in three crosses. The previous study concluded that there were two major genes segregating for osmotic adjustment in the three crosses. Application of the JSA method resulted in a change in the proposed genetic model: the presence of the two major genes was confirmed, with the addition of an unspecified number of polygenes.
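To make the simulation set-up concrete, a stripped-down Python sketch of the data-generating step, a single additive-dominance major gene plus a normal polygenic background across the six basic generations, is given below. All parameter values are illustrative assumptions, and the JSA likelihood-ratio fitting step itself is not reproduced.

    import numpy as np

    rng = np.random.default_rng(2)

    def simulate_generations(n, a=1.0, d=0.5, h2_major=0.3, h2_poly=0.2):
        # Genotypic values of the single major gene: AA = +a, Aa = d, aa = -a.
        geno_value = {2: a, 1: d, 0: -a}
        var_major = 0.5 * a**2 + 0.25 * d**2          # F2 major-gene variance
        var_poly = var_major * h2_poly / h2_major
        var_env = var_major * (1 - h2_major - h2_poly) / h2_major

        def pheno(genotypes):
            # For simplicity the same residual variance is used in every generation;
            # JSA itself distinguishes segregating and non-segregating generations.
            g = np.array([geno_value[int(k)] for k in genotypes])
            return g + rng.normal(0.0, np.sqrt(var_poly + var_env), len(g))

        return {
            "P1": pheno(np.full(n, 2)),
            "P2": pheno(np.full(n, 0)),
            "F1": pheno(np.full(n, 1)),
            "F2": pheno(rng.binomial(2, 0.5, n)),
            "B1": pheno(1 + rng.binomial(1, 0.5, n)),  # F1 x P1
            "B2": pheno(rng.binomial(1, 0.5, n)),      # F1 x P2
        }

    data = simulate_generations(200)
    print({k: round(v.mean(), 2) for k, v in data.items()})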
Abstract:
ISCOMs(R) are typically 40 nm cage-like structures comprising antigen, saponin, cholesterol and phospholipid. ISCOMs(R) have been shown to induce antibody responses and activate T helper cells and cytolytic T lymphocytes in a number of animal species, including non-human primates. Recent clinical studies have demonstrated that ISCOMs(R) are also able to induce antibody and cellular immune responses in humans. This review describes the current understanding of the ability of ISCOMs(R) to induce immune responses and the mechanisms underlying this property. Recent progress in the characterisation and manufacture of ISCOMs(R) will also be discussed. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
Blood-feeding parasites, including schistosomes, hookworms, and malaria parasites, employ aspartic proteases to make initial or early cleavages in ingested host hemoglobin. To better understand the substrate affinity of these aspartic proteases, sequences were aligned with, and/or three-dimensional molecular models were constructed of, the cathepsin D-like aspartic proteases of schistosomes and hookworms and of plasmepsins of Plasmodium falciparum and Plasmodium vivax, using the structure of human cathepsin D bound to the inhibitor pepstatin as the template. The catalytic subsites S5 through S4' were determined for the modeled parasite proteases. Subsequently, the crystal structure of mouse renin complexed with the nonapeptidyl inhibitor t-butyl-CO-His-Pro-Phe-His-Leu[CHOHCH2]Leu-Tyr-Tyr-Ser-NH2 (CH-66) was used to build homology models of the hemoglobin-degrading peptidases docked with a series of octapeptide substrates. The modeled octapeptides included representative sites in hemoglobin known to be cleaved by both Schistosoma japonicum cathepsin D and human cathepsin D, as well as sites cleaved by one but not the other of these enzymes. The peptidase-octapeptide substrate models revealed that differences in cleavage sites were generally attributable to the influence of a single amino acid change among the P5 to P4' residues that would either enhance or diminish the enzymatic affinity. The difference in cleavage sites appeared to be more profound than might be expected from sequence differences in the enzymes and hemoglobins. The findings support the notion that selective inhibitors of the hemoglobin-degrading peptidases of blood-feeding parasites at large could be developed as novel anti-parasitic agents.
Abstract:
The development of cropping systems simulation capabilities world-wide, combined with easy access to powerful computing, has resulted in a plethora of agricultural models and, consequently, model applications. Nonetheless, the scientific credibility of such applications and their relevance to farming practice is still being questioned. Our objective in this paper is to highlight some of the model applications from which benefits for farmers were or could be obtained via changed agricultural practice or policy. Changed on-farm practice due to the direct contribution of modelling, while keenly sought after, may in some cases be less achievable than a contribution via agricultural policies. This paper is intended to give some guidance for future model applications. It is not a comprehensive review of model applications, nor is it intended to discuss modelling in the context of social science or extension policy. Rather, we take snapshots around the globe to 'take stock' and to demonstrate that well-defined financial and environmental benefits can be obtained on-farm from the use of models. We highlight the importance of 'relevance' and hence the importance of true partnerships between all stakeholders (farmers, scientists, advisers) for the successful development and adoption of simulation approaches. Specifically, we address some key points that are essential for successful model applications, such as: (1) issues to be addressed must be neither trivial nor obvious; (2) a modelling approach must reduce complexity rather than proliferate choices in order to aid the decision-making process; and (3) the cropping systems must be sufficiently flexible to allow management interventions based on insights gained from models. The pros and cons of normative approaches (e.g. decision support software that can reach a wide audience quickly but is often poorly contextualized for any individual client) versus model applications within the context of an individual client's situation will also be discussed. We suggest that a tandem approach is necessary whereby the latter is used in the early stages of model application for confidence building amongst client groups. This paper focuses on five specific regions that differ fundamentally in terms of environment and socio-economic structure and hence in their requirements for successful model applications. Specifically, we will give examples from Australia and South America (high climatic variability, large areas, low input, technologically advanced); Africa (high climatic variability, small areas, low input, subsistence agriculture); India (high climatic variability, small areas, medium-level inputs, technologically progressing); and Europe (relatively low climatic variability, small areas, high input, technologically advanced). The contrast between Australia and Europe will further demonstrate how successful model applications are strongly influenced by the policy framework within which producers operate. We suggest that this might eventually lead to better adoption of fully integrated systems approaches and result in the development of resilient farming systems that are in tune with current climatic conditions and are adaptable to biophysical and socioeconomic variability and change. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
Observations of accelerating seismic activity prior to large earthquakes in natural fault systems have raised hopes for intermediate-term earthquake forecasting. If this phenomenon does exist, then what causes it to occur? Recent theoretical work suggests that the accelerating seismic release sequence is a symptom of increasing long-wavelength stress correlation in the fault region. A more traditional explanation, based on Reid's elastic rebound theory, argues that an accelerating sequence of seismic energy release could be a consequence of increasing stress in a fault system whose stress moment release is dominated by large events. Both of these theories are examined using two discrete models of seismicity: a Burridge-Knopoff block-slider model and an elastic continuum-based model. Both models display an accelerating release of seismic energy prior to large simulated earthquakes. In both models there is a correlation of the rate of seismic energy release with the total root-mean-squared stress and with the level of long-wavelength stress correlation. Furthermore, both models exhibit a systematic increase in the number of large events at high stress and high long-wavelength stress correlation levels. These results suggest that either explanation is plausible for the accelerating moment release in the models examined. A statistical model based on the Burridge-Knopoff block-slider is constructed which indicates that stress alone is sufficient to produce an accelerating release of seismic energy with time prior to a large earthquake.
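As a rough illustration of the block-slider class of models referred to above, the following Python sketch implements a uniformly driven, nearest-neighbour stress-redistribution automaton in one dimension and records an event-size series from which cumulative (Benioff-type) release curves can be inspected. It is a schematic stand-in, not the Burridge-Knopoff or elastic-continuum models actually used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 200            # number of blocks in a 1-D chain
    S_FAIL = 1.0       # failure (static friction) threshold
    ALPHA = 0.2        # fraction of the dropped stress passed to each neighbour
    N_EVENTS = 5000    # number of driven events

    stress = rng.uniform(0.0, S_FAIL, N)
    moments = []

    for _ in range(N_EVENTS):
        # Uniform plate loading: raise every block until the most loaded one fails.
        stress += S_FAIL - stress.max()
        moment = 0.0
        failed = np.flatnonzero(stress >= S_FAIL)
        while failed.size:
            i = failed[0]
            drop = stress[i]
            moment += drop
            stress[i] = 0.0
            for j in (i - 1, i + 1):          # pass stress to nearest neighbours
                if 0 <= j < N:
                    stress[j] += ALPHA * drop
            failed = np.flatnonzero(stress >= S_FAIL)
        moments.append(moment)

    # Cumulative Benioff-type release curve; inspect for acceleration before big events.
    cumulative_benioff = np.cumsum(np.sqrt(moments))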
Abstract:
Forecasting category or industry sales is a vital component of a company's planning and control activities. Sales for most mature durable product categories are dominated by replacement purchases. Previous sales models which explicitly incorporate a component of sales due to replacement assume there is an age distribution for replacements of existing units which remains constant over time. However, there is evidence that changes in factors such as product reliability/durability, price, repair costs, scrapping values, styling and economic conditions will result in changes in the mean replacement age of units. This paper develops a model for such time-varying replacement behaviour and empirically tests it in the Australian automotive industry. Both longitudinal census data and the empirical analysis of the replacement sales model confirm that there has been a substantial increase in the average aggregate replacement age for motor vehicles over the past 20 years. Further, much of this variation could be explained by real price increases and a linear temporal trend. Consequently, the time-varying model significantly outperformed previous models both in terms of fitting and forecasting the sales data. Copyright (C) 2001 John Wiley & Sons, Ltd.
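The notion of a time-varying replacement-age distribution can be illustrated with a small Python sketch in which expected replacement sales are a weighted sum of past unit sales, with weights drawn from a discretised age distribution whose mean drifts over time. The functional form, the numbers and the link to price are illustrative assumptions, not the model estimated in the paper.

    import numpy as np

    def replacement_sales(past_sales, mean_age, sd_age=2.0):
        # Expected replacement sales given unit sales in preceding years
        # (past_sales[k] = units sold k+1 years ago) and a discretised normal
        # replacement-age distribution with a time-varying mean.
        ages = np.arange(1, len(past_sales) + 1)
        weights = np.exp(-0.5 * ((ages - mean_age) / sd_age) ** 2)
        weights /= weights.sum()
        return float(np.dot(weights, past_sales))

    # Units sold 1..25 years ago (most recent first); arbitrary illustrative numbers.
    history = np.linspace(600_000, 400_000, 25)

    # A mean replacement age drifting upward, e.g. with real price increases.
    for year, mu in enumerate([10.0, 10.5, 11.0, 11.5]):
        print(year, round(replacement_sales(history, mu)))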