Abstract:
Porphyromonas gingivalis is a key periodontal pathogen that has been implicated in the etiology of chronic adult periodontitis. Our aim was to develop a protein-based vaccine for the prevention and/or treatment of this disease. We used a whole-genome sequencing approach to identify potential vaccine candidates. From the genomic sequence, we selected 120 genes using a series of bioinformatics methods. The selected genes were cloned for expression in Escherichia coli and screened with P. gingivalis antisera before purification and testing in an animal model. Two of these recombinant proteins (PG32 and PG33) demonstrated significant protection in the animal model, while a number were reactive with various antisera. This process allows the rapid identification of vaccine candidates from genomic data. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
The development of the new TOGA (titration and off-gas analysis) sensor for the detailed study of biological processes in wastewater treatment systems is outlined. The main innovation of the sensor is the amalgamation of titrimetric and off-gas measurement techniques. The resulting measured signals are: hydrogen ion production rate (HPR), oxygen transfer rate (OTR), nitrogen transfer rate (NTR), and carbon dioxide transfer rate (CTR). While OTR and NTR are applicable to aerobic and anoxic conditions, respectively, HPR and CTR are useful signals under all of the conditions found in biological wastewater treatment systems, namely, aerobic, anoxic and anaerobic. The sensor is therefore a powerful tool for studying the key biological processes under all these conditions. A major benefit of integrating the titrimetric and off-gas analysis methods is that the acid/base buffering systems, in particular the bicarbonate system, are properly accounted for. Experimental data from the TOGA sensor under aerobic, anoxic, and anaerobic conditions demonstrate the strength of the new sensor. In the aerobic environment, carbon oxidation (using acetate as an example carbon source) and nitrification are studied. Both the carbon and ammonia removal rates measured by the sensor compare very well with those obtained from off-line chemical analysis. Further, the aerobic acetate removal process is examined at a fundamental level using the metabolic pathway and stoichiometry established in the literature, whereby the rate of formation of storage products is identified. Under anoxic conditions, the denitrification process is monitored and, again, the measured rate of nitrogen gas transfer (NTR) matches well with the removal of the oxidised nitrogen compounds (measured chemically). In the anaerobic environment, the enhanced biological phosphorus removal process was investigated. In this case, the measured sensor signals (HPR and CTR) resulting from acetate uptake were used to determine the ratio of the rates of carbon dioxide production by competing groups of microorganisms, which is consequently a measure of the activity of these organisms. The sensor involves the use of expensive equipment such as a mass spectrometer and requires special gases to operate, thus incurring significant capital and operational costs. This makes the sensor more an advanced laboratory tool than an on-line sensor. (C) 2003 Wiley Periodicals, Inc.
Abstract:
Fault detection and isolation (FDI) are important steps in the monitoring and supervision of industrial processes. Biological wastewater treatment (WWT) plants are difficult to model, and hence to monitor, because of the complexity of the biological reactions and because plant influent and disturbances are highly variable and/or unmeasured. Multivariate statistical models have been developed for a wide variety of situations over the past few decades, proving successful in many applications. In this paper we develop a new monitoring algorithm based on Principal Components Analysis (PCA). It can be seen equivalently as making Multiscale PCA (MSPCA) adaptive, or as a multiscale decomposition of adaptive PCA. Adaptive Multiscale PCA (AdMSPCA) exploits the changing multivariate relationships between variables at different time-scales. Adaptation of scale PCA models over time permits them to follow the evolution of the process, inputs or disturbances. Performance of AdMSPCA and adaptive PCA on a real WWT data set is compared and contrasted. The most significant difference observed was the ability of AdMSPCA to adapt to a much wider range of changes. This was mainly due to the flexibility afforded by allowing each scale model to adapt whenever it did not signal an abnormal event at that scale. Relative detection speeds were examined only summarily, but seemed to depend on the characteristics of the faults/disturbances. The results of the algorithms were similar for sudden changes, but AdMSPCA appeared more sensitive to slower changes.
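As a rough illustration of the adapt-unless-alarmed logic described above, the sketch below decomposes a multivariate signal into Haar wavelet scales and runs an independent PCA monitor at each scale, refitting a scale's model only when its Q-statistic raises no alarm. The names, the three-sigma control limit, and the window handling are illustrative assumptions, not the authors' AdMSPCA implementation.

```python
import numpy as np

def haar_details(X, levels=3):
    """Per-scale Haar detail coefficients for each column of X (n x m)."""
    details, approx = [], np.asarray(X, dtype=float)
    for _ in range(levels):
        n = (approx.shape[0] // 2) * 2            # trim to even length
        pairs = approx[:n].reshape(-1, 2, approx.shape[1])
        details.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    return details

class ScalePCA:
    """PCA monitor for one scale: alarms on the Q-statistic, adapts otherwise."""
    def __init__(self, n_components=2):
        self.k = n_components
    def fit(self, C):
        self.mu = C.mean(axis=0)
        _, _, Vt = np.linalg.svd(C - self.mu, full_matrices=False)
        self.P = Vt[:self.k].T                    # principal loading matrix
        q = self._q(C)
        self.limit = q.mean() + 3 * q.std()       # crude control limit
    def _q(self, C):
        R = C - self.mu
        E = R - R @ self.P @ self.P.T             # residual off the PC subspace
        return (E ** 2).sum(axis=1)               # squared prediction error
    def step(self, C_new):
        alarm = self._q(C_new).mean() > self.limit
        if not alarm:                             # adapt only while in control
            self.fit(C_new)
        return alarm

# Usage: one monitor per scale, each flagging and adapting independently.
X = np.random.default_rng(0).normal(size=(256, 5))
monitors = []
for C in haar_details(X[:128]):
    m = ScalePCA(); m.fit(C); monitors.append(m)
alarms = [m.step(C) for m, C in zip(monitors, haar_details(X[128:]))]
```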
Abstract:
The effect of gamma-radiation on a perfluoroalkoxy (PFA) resin was examined using solid-state high-speed magic angle spinning (MAS) F-19 NMR spectroscopy. Samples were prepared for analysis by subjecting them to gamma-radiation in the dose range 0.5-3 MGy at either 303, 473, or 573 K. Structures identified include new saturated chain ends, short and long branches, and unsaturated groups. The formation of branched structures was found to increase with increasing irradiation temperature; however, at all temperatures the radiation chemical yield (G value) of new chain ends was greater than that of long branch points, suggesting that chain scission is the net process.
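As a worked illustration of the G-value comparison, assuming the conventional definition of the radiation chemical yield as product formed per unit absorbed energy (here mol per joule, with dose in Gy = J/kg); the concentrations below are invented numbers, not the paper's data.

```python
# Radiation chemical yield: G = (mol of product per kg) / (dose in J/kg)
def g_value(conc_mol_per_kg, dose_gray):
    return conc_mol_per_kg / dose_gray            # mol / J

# Hypothetical NMR-derived concentrations after a 1 MGy dose (not the
# paper's values): more new chain ends than long-branch points implies
# net chain scission.
dose = 1.0e6                                      # Gy
g_chain_ends = g_value(4.0e-2, dose)              # mol/J
g_branches   = g_value(1.5e-2, dose)
print(g_chain_ends > g_branches)                  # True -> scission dominates
```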
Abstract:
In this paper we use sensor-annotated abstraction hierarchies (Reising & Sanderson, 1996, 2002a,b) to show that, unless appropriately instrumented, configural displays designed according to the principles of ecological interface design (EID) may be vulnerable to misinterpretation when sensors become unreliable or are unavailable. Building on foundations established in Reising and Sanderson (2002a), we use a pasteurization process control example to show how sensor-annotated AHs help the analyst determine the impact of different instrumentation engineering policies on a configural display that is part of an ecological interface. Our analyses suggest that configural displays showing higher-order properties of a system are especially vulnerable under some conservative instrumentation configurations. However, sensor-annotated AHs can be used to indicate where corrective instrumentation might be placed. We argue that if EID is to be effectively employed in the design of displays for complex systems, then the information needs of the human operator must be considered while instrumentation requirements are being formulated. Rasmussen's abstraction hierarchy, and particularly its extension to the analysis of information captured by or derived from sensors, may therefore be a useful adjunct to upstream instrumentation design. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Rapid accumulation of few polyhedra (FP) mutants was detected during serial passaging of Helicoverpa armigera nucleopolyhedrovirus (HaSNPV) in cell culture. By passage 6, 100% of infected cells displayed the FP phenotype. The specific yield decreased from 178 polyhedra per cell at passage 2 to two polyhedra per cell at passage 6. The polyhedra at passage 6 were not biologically active, with a 28-fold reduction in potency compared to passage 3. Electron microscopy studies revealed that very few polyhedra were produced in an FP-infected cell (< 10 polyhedra per section), and in most cases these polyhedra contained no virions. A specific failure in the intranuclear nucleocapsid envelopment process in the FP-infected cells, leading to the accumulation of naked nucleocapsids, was observed. Genomic restriction endonuclease digestion profiles of budded virus DNA from all passages did not indicate any large DNA insertions or deletions of the kind often associated with FP phenotypes in the extensively studied Autographa californica nucleopolyhedrovirus and Galleria mellonella nucleopolyhedrovirus. Within an HaSNPV 25K FP gene homologue, a single base-pair insertion (an adenine residue) within a region of repetitive sequences (seven adenine residues) was identified in one plaque-purified HaSNPV FP mutant. Furthermore, sequences obtained from individual clones of 25K FP gene PCR products of a late passage revealed point mutations or single base-pair insertions occurring throughout the gene. The mechanism of FP mutation in HaSNPV is likely similar to that seen for Lymantria dispar nucleopolyhedrovirus, involving point mutations or small insertions/deletions in the 25K FP gene.
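A small sketch of why one extra adenine in a run of seven disrupts the gene: the insertion shifts every downstream codon out of frame. The sequence is invented for illustration and is not the HaSNPV 25K FP sequence.

```python
def codons(seq):
    """Split a reading frame into codons (drop any trailing partial codon)."""
    return [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]

wild   = "ATGGCT" + "A" * 7 + "GGTTGCTAA"   # invented ORF with the A7 run
mutant = "ATGGCT" + "A" * 8 + "GGTTGCTAA"   # slipped-strand +1 adenine

print(codons(wild))    # ... 'AGG', 'TTG', 'CTA' -- original frame
print(codons(mutant))  # ... 'AAG', 'GTT', 'GCT' -- every later codon shifted
```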
Abstract:
It has been argued that a firm's capacity to learn from its market is a source of both innovation and competitive advantage. However, past research has failed to conceptualize market-focused learning activity as a capability with the potential to contribute to competitive advantage. Prior innovation research has been biased toward technological innovation, yet there is evidence that both technological and non-technological innovations contribute to competitive advantage, reflecting the need for a broader conceptualization of the innovation construct. Past research has also overlooked the critical role of entrepreneurship in the capability-building process. Competitive advantage has been predominantly measured in terms of financial indicators of performance. In general, the literature reflects the need for comprehensive measures of organizational innovation and competitive advantage. This paper examines the role of market-focused learning capability in organizational innovation-based competitive strategy. It contributes to strategic marketing theory by developing and refining measures of entrepreneurship, market-focused learning capability, organizational innovation, and sustained competitive advantage, and by testing relationships among these constructs.
Abstract:
The two steps of nitrification, namely the oxidation of ammonia to nitrite and nitrite to nitrate, often need to be considered separately in process studies. For a detailed examination, it is desirable to monitor the two-step sequence using online measurements. In this paper, the use of online titrimetric and off-gas analysis (TOGA) methods for the examination of the process is presented. Using the known reaction stoichiometry, combination of the measured signals (rates of hydrogen ion production, oxygen uptake and carbon dioxide transfer) allows the determination of the three key process rates, namely the ammonia consumption rate, the nitrite accumulation rate and the nitrate production rate. Individual reaction rates determined with the TOGA sensor under a number of operation conditions are presented. The rates calculated directly from the measured signals are compared with those obtained from offline liquid sample analysis. Statistical analysis confirms that the results from the two approaches match well. This result could not have been guaranteed using alternative online methods. As a case study, the influences of pH and dissolved oxygen (DO) on nitrite accumulation are tested using the proposed method. It is shown that nitrite accumulation decreased with increasing DO and pH. Possible reasons for these observations are discussed. (C) 2003 Elsevier Science Ltd. All rights reserved.
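A minimal sketch of the signal-to-rate combination the abstract describes: with a stoichiometric matrix S mapping the step rates to the measured signals, the rates follow from a least-squares solve of S r = m. The H+ and O2 rows below use textbook nitrification stoichiometry (2 mol H+ and 1.5 mol O2 per mol N for ammonia oxidation; 0.5 mol O2 for nitrite oxidation); the CO2 row coefficients and the signal values are placeholders, not the paper's calibrated matrix or data.

```python
import numpy as np

# Rows: HPR, OUR, CTR signals; columns: r_nh4 (ammonia oxidation rate) and
# r_no2 (nitrite oxidation rate), both in mol N / (L.h).
#   H+  row: NH4+ + 1.5 O2 -> NO2- + H2O + 2 H+   (2 mol H+ per mol N)
#   O2  row: 1.5 mol O2 for step one, 0.5 mol O2 for step two
#   CO2 row: autotrophic CO2 uptake; yield coefficients are placeholders.
S = np.array([[2.00,  0.00],
              [1.50,  0.50],
              [-0.08, -0.02]])

m = np.array([2.1e-3, 1.7e-3, -9.0e-5])      # measured HPR, OUR, CTR (made up)

r, *_ = np.linalg.lstsq(S, m, rcond=None)    # least-squares rate estimate
r_nh4, r_no2 = r
print("ammonia consumption :", r_nh4)
print("nitrite accumulation:", r_nh4 - r_no2)
print("nitrate production  :", r_no2)
```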
Nitrification of high-strength ammonia wastewater treatment - process selection is the major factor.
Abstract:
Biological nitrogen removal via the nitrite pathway in wastewater treatment is important for saving both aeration costs and the electron donor required for denitrification. Wastewater nitrification and nitrite accumulation were carried out in a biofilm airlift reactor with an autotrophic nitrifying biofilm. The biofilm reactor showed almost complete nitrification, and most of the oxidized ammonium was present as nitrite at ammonium loads of 1.5 to 3.5 kg N/m³·d. Nitrite accumulation was stably achieved by the selective inhibition of nitrite oxidizers through free ammonia and dissolved oxygen limitation. Stable 100% conversion to nitrite could also be achieved even in the absence of free ammonia inhibition of nitrite oxidizers. Batch ammonium oxidation and nitrite oxidation with the nitrite-accumulating nitrifying biofilm showed that nitrite oxidation was completely inhibited when free ammonia was higher than 0.2 mg N/L. However, nitrite oxidation activity recovered as soon as the free ammonia concentration fell below this threshold, provided dissolved oxygen was not limiting. Fluorescence in situ hybridization analysis of the cryosectioned nitrite-accumulating nitrifying biofilm showed that the β-subclass of Proteobacteria, to which ammonia oxidizers belong, was distributed in the outer part of the biofilm, whereas the α-subclass of Proteobacteria, to which nitrite oxidizers belong, was found mainly in the inner part. It is likely that dissolved oxygen deficiency or limitation in the inner part of the nitrifying biofilm, where the nitrite oxidizers reside, is responsible for the complete shutdown of nitrite oxidizer activity in the absence of free ammonia inhibition.
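A short sketch of the free-ammonia check implied above, using the widely cited equilibrium expression (after Anthonisen and co-workers) for free NH3 as N as a function of total ammonia nitrogen, pH, and temperature; the wiring of the 0.2 mg N/L threshold is illustrative.

```python
import math

def free_ammonia_n(tan_mg_n_per_l, ph, temp_c):
    """Free NH3 as N (mg/L) from total ammonia N (TAN), pH and temperature:
    FA = TAN * 10^pH / (exp(6344 / (273 + T)) + 10^pH)."""
    return tan_mg_n_per_l * 10 ** ph / (math.exp(6344 / (273 + temp_c)) + 10 ** ph)

# Example: 50 mg N/L total ammonia at pH 7.5 and 30 C
fa = free_ammonia_n(50.0, 7.5, 30.0)
nitrite_oxidation_inhibited = fa > 0.2        # threshold reported above
print(round(fa, 2), nitrite_oxidation_inhibited)
```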
Abstract:
The use of a fitted parameter watershed model to address water quantity and quality management issues requires that it be calibrated under a wide range of hydrologic conditions. However, rarely does model calibration result in a unique parameter set. Parameter nonuniqueness can lead to predictive nonuniqueness. The extent of model predictive uncertainty should be investigated if management decisions are to be based on model projections. Using models built for four neighboring watersheds in the Neuse River Basin of North Carolina, the application of the automated parameter optimization software PEST in conjunction with the Hydrologic Simulation Program Fortran (HSPF) is demonstrated. Parameter nonuniqueness is illustrated, and a method is presented for calculating many different sets of parameters, all of which acceptably calibrate a watershed model. A regularization methodology is discussed in which models for similar watersheds can be calibrated simultaneously. Using this method, parameter differences between watershed models can be minimized while maintaining fit between model outputs and field observations. In recognition of the fact that parameter nonuniqueness and predictive uncertainty are inherent to the modeling process, PEST's nonlinear predictive analysis functionality is then used to explore the extent of model predictive uncertainty.
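A toy sketch of the regularization idea: calibrate two watershed models jointly while penalizing inter-watershed parameter differences, so that among the many parameter sets that fit acceptably, the most similar pair is preferred. The linear "model" stands in for HSPF and λ is illustrative; PEST realizes this idea through Tikhonov-style regularization on real model runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in linear models y = X p for two neighbouring watersheds (HSPF
# would supply the real simulated-vs-observed relationship).
X_a, X_b = rng.normal(size=(30, 4)), rng.normal(size=(30, 4))
p_true = np.array([1.0, -0.5, 0.3, 0.8])
y_a = X_a @ p_true + 0.05 * rng.normal(size=30)
y_b = X_b @ (p_true + 0.02) + 0.05 * rng.normal(size=30)

lam = 10.0                                    # regularization weight (illustrative)
# Stack the two fit problems plus a penalty block lam * (p_a - p_b) ~ 0.
A = np.block([[X_a,                np.zeros_like(X_a)],
              [np.zeros_like(X_b), X_b],
              [lam * np.eye(4),    -lam * np.eye(4)]])
b = np.concatenate([y_a, y_b, np.zeros(4)])

p, *_ = np.linalg.lstsq(A, b, rcond=None)
p_a, p_b = p[:4], p[4:]
print(np.abs(p_a - p_b).max())                # small: parameters kept similar
```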
Abstract:
Within the development of motor vehicles, crash safety (e.g. occupant protection, pedestrian protection, low-speed damageability) is one of the most important attributes. To meet increasing requirements under shorter cycle times and rising cost pressure, car manufacturers keep intensifying their use of virtual development tools such as those in the domain of Computer Aided Engineering (CAE). For crash simulations, the explicit finite element method (FEM) is applied. The accuracy of the simulation process depends strongly on the accuracy of the simulation model, including the midplane mesh. One of the roughest approximations typically made concerns part thickness, which in reality can vary locally; for reasons of complexity, however, a constant thickness value is almost always defined for the entire part. On the other hand, correct thickness treatment is a key enabler for precise fracture analysis within FEM. Thus, per-element thickness information, which does not exist explicitly in the FEM model, can significantly improve crash simulation quality, especially fracture prediction. Although the thickness is not explicitly available from the FEM model, it can be inferred from the original CAD geometric model through geometric calculations. This paper proposes and compares two thickness estimation algorithms based on ray tracing and nearest-neighbour 3D range searches. A systematic quantitative analysis of the accuracy of both algorithms is presented, together with a thorough identification of the particular geometric arrangements under which their accuracy can be compared. These results expose each technique's weaknesses and hint towards a new, integrated approach that linearly combines the estimates produced by the two algorithms.
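A condensed sketch of the nearest-neighbour variant of the two estimators discussed: sample points on the outer CAD skins, then take twice the distance from each midplane element to its closest skin sample as the local thickness, with scipy's cKDTree doing the 3D range search. The flat-plate geometry is purely illustrative; a ray-tracing estimator would instead intersect each element normal with the skin, and a linear blend of the two estimates follows the integrated approach suggested above.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Toy geometry: a 10 x 10 flat plate of thickness t with midplane at z = 0.
t = 2.5
top    = np.column_stack([rng.random((500, 2)) * 10, np.full(500,  t / 2)])
bottom = np.column_stack([rng.random((500, 2)) * 10, np.full(500, -t / 2)])
surface_samples = np.vstack([top, bottom])     # sampled outer CAD skins

midplane = np.column_stack([rng.random((50, 2)) * 10, np.zeros(50)])

tree = cKDTree(surface_samples)                # 3D nearest-neighbour index
d, _ = tree.query(midplane, k=1)               # closest skin sample per element
print((2.0 * d).mean())                        # close to t for this flat plate
```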
Abstract:
Among the treatments proposed to repair lesions of degenerative joint disease (DJD), chondroprotective nutraceuticals composed of glucosamine and chondroitin sulfate are a non-invasive therapy with properties that favor cartilage health. Although used in humans, they are also available for veterinary use, administered as nutritional supplements without prescription, since they are registered only with the Inspection Service, which does not require safety and efficacy testing. The absence of the efficacy and safety tests that the Ministry of Agriculture requires for veterinary medicines, together with the lack of scientific studies proving their benefits, raises doubts about the efficacy of the concentrations of these active substances. In this context, the objective of this study was to evaluate, through clinical and radiographic analysis, the efficacy of a veterinary chondroprotective nutraceutical based on chondroitin sulfate and glucosamine in the repair of osteochondral defects in the lateral femoral condyle of 48 dogs. The animals were divided into a treatment group (TG) and a control group (CG), so that only the TG received the nutraceutical every 24 hours at the dose recommended by the manufacturer. The results at the four treatment times (15, 30, 60 and 90 days) showed that the chondroprotective nutraceutical, at the dose, formulation and administration schedule used, did not improve clinical signs and did not radiographically influence the repair of the defects, since the treated and control groups showed similar radiographic findings at the end of the treatments.
Abstract:
Current software development often relies on non-trivial coordination logic for combining autonomous services, possibly running on different platforms. As a rule, however, such a coordination layer is strongly woven into the application at the source code level. Its precise identification therefore becomes a major methodological (and technical) problem and a challenge to any program understanding or refactoring process. The approach introduced in this paper resorts to slicing techniques to extract coordination data from source code. Such data are captured in a specific dependency graph structure from which a coordination model can be recovered, either in the form of an Orc specification or as a collection of code fragments corresponding to typical coordination patterns identified in the system. Tool support is also discussed.
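An illustrative sketch of the graph side of the approach: represent statements as nodes in a dependency graph (here via networkx), mark those that invoke coordination primitives, and take the backward slice (reverse reachability over dependencies) from them as the candidate coordination layer. The node names and the "coordination primitive" test are invented for the example; the paper's actual extraction operates on real source code.

```python
import networkx as nx

# Tiny program dependency graph: edges point from a statement to the
# statements whose data/control it depends on.
pdg = nx.DiGraph()
pdg.add_edges_from([
    ("send(queue, msg)", "msg = build(payload)"),
    ("msg = build(payload)", "payload = read_input()"),
    ("render(report)", "report = format(data)"),   # pure computation
])

def is_coordination_primitive(stmt):
    # Invented heuristic: the statement calls into a messaging API.
    return stmt.startswith(("send(", "receive(", "spawn("))

seeds = [n for n in pdg if is_coordination_primitive(n)]
coordination_slice = set().union(
    *(nx.descendants(pdg, s) | {s} for s in seeds))
print(coordination_slice)   # the statements the coordination layer depends on
```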
Abstract:
What sort of component coordination strategies emerge in a software integration process? How can such strategies be discovered and further analysed? How close are they to the coordination component of the envisaged architectural model that was supposed to guide the integration process? This paper introduces a framework in which such questions can be discussed and illustrates its use by describing part of a real case study. The approach is based on a methodology that enables semi-automatic discovery of coordination patterns from source code, combining generalized slicing techniques with graph manipulation.