948 results for Process simulation
Abstract:
This study examined electromyographically the masseter and temporal muscles of patients with maxillary and mandibular osteoporosis and compared them with a control group. Sixty individuals of both genders, with an average age of 53.0 +/- 5 years, took part in the study, distributed into two groups of 30 individuals each: (1) individuals with osteoporosis and (2) controls, both assessed during habitual and non-habitual mastication. The electromyographic apparatus used was a Myosystem-BR1 (DataHomins Technology Ltda.) with five acquisition channels and active differential electrodes. Statistical analysis of the results was performed using SPSS version 15.0 (Chicago, IL, USA). Student's t-test indicated no significant differences (p > 0.05) between the normalized ensemble-average values obtained for the masticatory cycles of the two groups. It was concluded that individuals with osteoporosis did not show significantly lower masticatory cycle performance and efficiency than control subjects during habitual and non-habitual mastication. This result is important because it demonstrates that the complex physiological process of mastication remains functional in individuals with osteoporosis of the facial bones.
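As a rough illustration of the comparison described above, the sketch below runs an independent-samples Student's t-test on normalized ensemble-average EMG values; the arrays are hypothetical placeholders, not data from this study.

```python
# Independent-samples t-test on normalized ensemble-average EMG values,
# one value per subject and group. Values are hypothetical placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
osteoporosis = rng.normal(loc=1.0, scale=0.15, size=30)  # hypothetical normalized EMG
control = rng.normal(loc=1.0, scale=0.15, size=30)

t_stat, p_value = stats.ttest_ind(osteoporosis, control)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # p > 0.05 -> no significant difference
```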
Abstract:
This work studied the structure-hepatic disposition relationships for cationic drugs of varying lipophilicity using a single-pass, in situ rat liver preparation. The lipophilicity of the cationic drugs studied in this work decreases in the following order: diltiazem > propranolol > labetalol > prazosin > antipyrine > atenolol. Parameters characterizing the hepatic distribution and elimination kinetics of the drugs were estimated using the multiple indicator dilution method. The kinetic model used to describe drug transport (the two-phase stochastic model) integrated cytoplasmic binding kinetics and belongs to the class of barrier-limited and space-distributed liver models. The hepatic extraction ratio (E) (0.30-0.92) increased with lipophilicity. The intracellular binding rate constant (k(on)) and the equilibrium amount ratios characterizing the slowly and rapidly equilibrating binding sites (K-S and K-R) increase with the lipophilicity of the drug (k(on): 0.05-0.35 s(-1); K-S: 0.61-16.67; K-R: 0.36-0.95), whereas the intracellular unbinding rate constant (k(off)) decreases with the lipophilicity of the drug (0.081-0.021 s(-1)). The partition ratio of the influx (k(in)) and efflux (k(out)) rate constants, k(in)/k(out), increases with increasing pK(a) value of the drug [from 1.72 for antipyrine (pK(a) = 1.45) to 9.76 for propranolol (pK(a) = 9.45)], the differences in k(in)/k(out) for the different drugs mainly arising from ion trapping in the mitochondria and lysosomes. The values of the intrinsic elimination clearance (CLint), permeation clearance (CLpT), and permeability-surface area product (PS) all increase with the lipophilicity of the drug [CLint (ml·min(-1)·g(-1) of liver): 10.08-67.41; CLpT (ml·min(-1)·g(-1) of liver): 10.80-5.35; PS (ml·min(-1)·g(-1) of liver): 14.59-90.54]. It is concluded that cationic drug kinetics in the liver can be modeled using models that integrate the presence of cytoplasmic binding, a hepatocyte barrier, and a vascular transit density function.
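A deliberately simplified, deterministic compartmental sketch of the mechanisms named above (barrier-limited influx/efflux, cytoplasmic binding/unbinding, intracellular elimination) is given below; it is not the authors' two-phase stochastic model, and all rate constants are hypothetical.

```python
# Three compartments: extracellular drug <-> intracellular unbound drug <-> bound
# drug, with first-order intracellular elimination. Rate constants are invented.
import numpy as np
from scipy.integrate import solve_ivp

k_in, k_out = 0.5, 0.1   # influx/efflux across the hepatocyte barrier (s^-1)
k_on, k_off = 0.2, 0.05  # intracellular binding/unbinding (s^-1)
k_el = 0.05              # intracellular elimination (s^-1)

def rates(t, y):
    c_ext, c_unbound, c_bound = y
    return [
        -k_in * c_ext + k_out * c_unbound,
        k_in * c_ext - (k_out + k_on + k_el) * c_unbound + k_off * c_bound,
        k_on * c_unbound - k_off * c_bound,
    ]

sol = solve_ivp(rates, (0.0, 60.0), [1.0, 0.0, 0.0], dense_output=True)
print(sol.y[:, -1])  # relative amounts in each compartment at t = 60 s
```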
Abstract:
In this work, we present a systematic approach to the representation of modelling assumptions. Modelling assumptions form the fundamental basis for the mathematical description of a process system. These assumptions can be translated into either additional mathematical relationships or constraints between model variables, equations, balance volumes or parameters. In order to analyse the effect of modelling assumptions in a formal, rigorous way, a syntax of modelling assumptions has been defined. The smallest indivisible syntactical element, the so-called assumption atom, has been identified as a triplet. With this syntax, a modelling assumption can be described either as an elementary assumption, i.e. an assumption consisting of a single assumption atom, or as a composite assumption consisting of a conjunction of elementary assumptions. This syntax enables us to represent modelling assumptions as transformations acting on the set of model equations. The notions of syntactical correctness and semantic consistency of sets of modelling assumptions are defined, and necessary conditions for checking them are given. These transformations can be used in several ways and their implications can be analysed by formal methods. The modelling assumptions define model hierarchies, that is, series of model families, each belonging to a particular equivalence class. These model equivalence classes can be related to primary assumptions regarding the definition of mass, energy and momentum balance volumes, and to secondary and tertiary assumptions regarding the presence or absence and the form of mechanisms within the system. Within each equivalence class there are many model members, related to one another by algebraic model transformations. We show how these model hierarchies are driven by the underlying assumption structure and indicate some implications for system dynamics and complexity. (C) 2001 Elsevier Science Ltd. All rights reserved.
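A minimal sketch of the assumption syntax might look as follows; the (subject, relation, value) triplet fields, the example atoms and the crude text-substitution transformation are illustrative guesses, not the paper's exact formalism.

```python
# Assumption atom as a triplet; composite assumption as a conjunction of atoms.
from typing import NamedTuple

class AssumptionAtom(NamedTuple):
    subject: str    # model element the assumption refers to (variable, term, ...)
    relation: str   # e.g. "is", "equals", "is-negligible"
    value: str

# Elementary assumption: a single atom.
isothermal = AssumptionAtom("reactor temperature", "is", "constant")

# Composite assumption: a conjunction (here, a tuple) of elementary assumptions.
steady_state = (
    AssumptionAtom("accumulation term", "equals", "0"),
    AssumptionAtom("inputs", "are", "time-invariant"),
)

def apply_assumption(equations: list[str], atom: AssumptionAtom) -> list[str]:
    # An assumption viewed as a transformation acting on the set of model
    # equations: here, crudely, substituting the assumed value into each equation.
    return [eq.replace(atom.subject, atom.value) for eq in equations]

print(apply_assumption(["dT/dt = f(reactor temperature)"], isothermal))
```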
Abstract:
Geographical information systems (GIS) coupled to 3D visualisation technology are an emerging tool for urban planning and landscape design applications. The utility of 3D GIS for realistically visualising the built environment and proposed development scenarios is much advocated in the literature. Planners assess the merits of proposed changes using visual impact assessment (VIA). We have used Arcview GIS and visualisation software called PolyTRIM, from the University of Toronto Centre for Landscape Research (CLR), to create a 3D scene for the entrance to a university campus. The paper investigates the thesis that facilitating VIA in planning and design requires not only visualisation but also a structured evaluation technique (Delphi) to arbitrate the decision-making process. (C) 2001 Elsevier Science B.V. All rights reserved.
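The structured (Delphi) evaluation step could be sketched as below: panel ratings of a proposed scenario are aggregated per round, and consensus is declared when the interquartile range falls under a threshold. The scores and consensus rule are hypothetical.

```python
# Aggregate one Delphi round of panel ratings: median, spread, consensus flag.
import numpy as np

def delphi_round(ratings, iqr_threshold=1.0):
    """Return (median, iqr, consensus_reached) for one round of panel ratings."""
    q1, median, q3 = np.percentile(ratings, [25, 50, 75])
    iqr = q3 - q1
    return median, iqr, iqr <= iqr_threshold

round1 = [3, 5, 7, 4, 6, 2, 5]   # visual-impact scores from 7 panellists
round2 = [4, 5, 5, 4, 5, 4, 5]   # re-rated after seeing round-1 feedback
for r in (round1, round2):
    print(delphi_round(r))
```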
Abstract:
Ten years ago, an anaerobic ammonium oxidation ('anammox') process was discovered in a denitrifying pilot plant reactor. From this system, a highly enriched microbial community was obtained, dominated by a single deep-branching planctomycete, Candidatus Brocadia anammoxidans. Phylogenetic inventories of different wastewater treatment plants with anammox activity have suggested that at least two genera in Planctomycetales can catalyse the anammox process. Electron microscopy of the ultrastructure of B. anammoxidans has shown that several membrane-bounded compartments are present inside the cytoplasm. Hydroxylamine oxidoreductase, a key anammox enzyme, is found exclusively inside one of these compartments, tentatively named the 'anammoxosome'.
Abstract:
Modelling and simulation studies were carried out at 26 cement clinker grinding circuits including tube mills, air separators and high pressure grinding rolls in 8 plants. The results reported earlier have shown that tube mills can be modelled as several mills in series, and that the internal partition in tube mills can be modelled as a screen which must retain coarse particles in the first compartment but not impede the flow of drying air. In this work the modelling has been extended to show that the Tromp curve which describes separator (classifier) performance can be modelled in terms of d(50)(corr), by-pass, the fish hook, and the sharpness of the curve. Also, the high pressure grinding rolls model developed at the Julius Kruttschnitt Mineral Research Centre gives satisfactory predictions using a breakage function derived from impact and compressed bed tests. Simulation studies of a full plant incorporating a tube mill, HPGR and separators showed that the models could successfully predict the performance of another mill working under different conditions. The simulation capability can therefore be used for process optimization and design. (C) 2001 Elsevier Science Ltd. All rights reserved.
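One common parameterisation of a separator partition (Tromp) curve in terms of d(50)(corr), by-pass and sharpness is a Whiten-type efficiency curve, sketched below; the fish-hook term is omitted, the exact model form used in this work may differ, and all parameter values are illustrative.

```python
# Whiten-type partition curve: by-pass plus a corrected efficiency term that
# passes 50% at d50_corr, with alpha controlling the sharpness of separation.
import numpy as np

def tromp(d, d50_corr, alpha, bypass):
    """Fraction of feed of size d (same units as d50_corr) reporting to the coarse stream."""
    x = np.asarray(d, dtype=float) / d50_corr
    corrected = (np.exp(alpha * x) - 1.0) / (np.exp(alpha * x) + np.exp(alpha) - 2.0)
    return bypass + (1.0 - bypass) * corrected

sizes = np.array([5.0, 15.0, 30.0, 60.0, 120.0])   # particle sizes, microns
print(tromp(sizes, d50_corr=30.0, alpha=2.5, bypass=0.15))
```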
Abstract:
The impact of fluorine in copper flotation was relatively unknown until the introduction of skarn ores in the Ok Tedi concentrator. Fluorine in the copper concentrates reports to the gas phase during the smelting stage and forms a corrosive H2SO4-HCl-HF acid brine mixture which must be neutralised. This work was aimed at studying the mineralogy of the fluorosilicate minerals contained in the various ore types present in the Ok Tedi porphyry copper deposit. The electron microprobe was used to analyse for fluorine and hence identify the fluorosilicate minerals in each ore type. This study revealed talc, phlogopite, biotite, clays, amphiboles, fluorapatite and titanite to be the sources of fluorine in the orebody. Laboratory and plant investigations were conducted to study the flotation response of these minerals. Chemical assaying of the products of these tests was done to determine the bulk assay of fluorine. Using Rietveld analysis, quantitative estimates of the fluorosilicate minerals in these products were generated. Combining the bulk assay with the corresponding mineralogical assay enabled an understanding of the flotation behavior of fluorine and its associated mineralogy. Talc and phlogopite were found to be the causes of the fluorine problem at Ok Tedi. (C) 2001 Elsevier Science Ltd. All rights reserved.
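The combining step amounts to a simple mass balance: each mineral contributes its abundance times its fluorine content to the bulk fluorine assay. The sketch below illustrates this with invented mineral fluorine contents and abundances, not Ok Tedi data.

```python
# Reconcile a bulk chemical fluorine assay with Rietveld mineral abundances:
# F contributed by each mineral = (wt% mineral) x (wt% F in that mineral) / 100.
mineral_f_content = {"talc": 0.5, "phlogopite": 3.0, "biotite": 1.5, "fluorapatite": 3.8}  # wt% F (invented)
abundance = {"talc": 4.0, "phlogopite": 2.5, "biotite": 1.0, "fluorapatite": 0.3}          # wt% of product (invented)

contribution = {m: abundance[m] * mineral_f_content[m] / 100.0 for m in abundance}
print(contribution)                # wt% F contributed by each mineral
print(sum(contribution.values()))  # compare against the bulk chemical F assay
```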
Abstract:
The step size determines the accuracy of a discrete element simulation. Because the position and velocity updates use a pre-calculated table, step size control cannot rely on the usual integration-formula error estimates. Instead, a step size control scheme for the table-driven velocity and position calculation uses the difference between the result of one big step and that of two small steps. This variable time step method automatically chooses a suitable time step size for each particle at each step according to the local conditions. Simulation using a fixed time step is compared with simulation using a variable time step. The difference in computation time for the same accuracy using a variable step size (compared to a fixed step) depends on the particular problem; for a simple test case the times are roughly similar. However, the variable step size gives the required accuracy on the first run, whereas a fixed step size may require several runs to check the simulation accuracy, or a conservative step size that results in longer run times. (C) 2001 Elsevier Science Ltd. All rights reserved.
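A minimal sketch of the step-doubling control described above: advance a particle with one big step and with two half steps, compare the results, and shrink or grow the step to meet a target error. The paper's table-driven update is replaced here by a simple semi-implicit Euler step, and the force law and tolerance are hypothetical.

```python
# Adaptive step size by step doubling for a single 1-D particle.

def step(x, v, dt, accel):
    # Semi-implicit Euler: update velocity, then position with the new velocity.
    v_new = v + accel(x) * dt
    x_new = x + v_new * dt
    return x_new, v_new

def adaptive_step(x, v, dt, accel, tol=1e-6):
    x1, v1 = step(x, v, dt, accel)                      # one big step
    xh, vh = step(x, v, dt / 2, accel)                  # two half steps
    x2, v2 = step(xh, vh, dt / 2, accel)
    err = max(abs(x1 - x2), abs(v1 - v2))
    if err > tol:
        return adaptive_step(x, v, dt / 2, accel, tol)  # reject: retry smaller
    dt_next = dt * 2 if err < tol / 4 else dt           # accept: maybe grow
    return x2, v2, dt_next

accel = lambda x: -100.0 * x                            # linear spring as a stand-in contact force
x, v, dt = 1.0, 0.0, 1e-2
for _ in range(5):
    x, v, dt = adaptive_step(x, v, dt, accel)
    print(x, v, dt)
```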
Abstract:
Recent reviews of the desistance literature have advocated studying desistance as a process, yet current empirical methods continue to measure desistance as a discrete state. In this paper, we propose a framework for empirical research that recognizes desistance as a developmental process. This approach focuses on changes in the offending rate rather than on offending itself. We describe a statistical model to implement this approach and provide an empirical example. We conclude with several suggestions for future research endeavors that arise from our conceptualization of desistance.
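As a toy stand-in for a model of this kind (not the paper's actual model), the offending rate can be written as log-linear in age, with desistance appearing as a negative age trend; the coefficients below are invented.

```python
# Desistance as a declining offending *rate*: Poisson counts with a log-linear
# age trend, rather than a discrete offender/non-offender state.
import numpy as np

beta0, beta1 = 1.5, -0.08                   # hypothetical intercept and age slope
ages = np.arange(16, 41)
lam = np.exp(beta0 + beta1 * (ages - 16))   # expected offences per year at each age

rng = np.random.default_rng(1)
counts = rng.poisson(lam)                   # one simulated offending trajectory
for age, rate, n in zip(ages, lam, counts):
    print(age, round(rate, 2), n)
```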
Abstract:
Activated sludge models are used extensively in the study of wastewater treatment processes. While various commercial implementations of these models are available, many people need to code the models themselves using the simulation packages available to them. Quality assurance of such models is difficult. While benchmarking problems have been developed and are available, comparing simulation data with that of commercial models leads only to the detection, not the isolation, of errors; identifying the errors in the code is time-consuming. In this paper, we address the problem by developing a systematic and largely automated approach to the isolation of coding errors. There are three steps: firstly, possible errors are classified according to their place in the model structure, and a feature matrix is established for each class of errors. Secondly, an observer is designed to generate residuals such that each class of errors imposes a subspace, spanned by its feature matrix, on the residuals. Finally, localising the residuals in a subspace isolates the coding errors. The algorithm proved capable of rapidly and reliably isolating a variety of single and simultaneous errors in a case study using the ASM 1 activated sludge model, in which a newly coded model was verified against a known implementation. The method is also applicable to the simultaneous verification of any two independent implementations, and hence is useful in commercial model development.
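A minimal sketch of the final isolation step: a residual vector is attributed to the error class whose feature-matrix subspace explains it best (smallest least-squares misfit). The matrices and residual are invented, and the observer that generates the residuals is not shown.

```python
# Attribute a residual to the error class whose feature-matrix column space
# leaves the smallest least-squares misfit.
import numpy as np

def isolate(residual, feature_matrices):
    misfits = {}
    for name, F in feature_matrices.items():
        coeffs, *_ = np.linalg.lstsq(F, residual, rcond=None)
        misfits[name] = np.linalg.norm(residual - F @ coeffs)
    return min(misfits, key=misfits.get), misfits

F = {
    "stoichiometry error": np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]),
    "rate-expression error": np.array([[1.0], [-1.0], [0.0]]),
}
r = np.array([0.9, -1.1, 0.05])  # residual from the observer at one time point
print(isolate(r, F))
```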
Abstract:
In this paper a methodology for integrated multivariate monitoring and control of biological wastewater treatment plants during extreme events is presented. To monitor the process, on-line dynamic principal component analysis (PCA) is performed on the process data to extract the principal components that represent the underlying mechanisms of the process. Fuzzy c-means (FCM) clustering is then used to classify the operational state. Performing the clustering on the PCA scores reduces the computational burden and increases robustness through noise attenuation. The class-membership information from FCM is used to derive adequate control set points for the local control loops. The methodology is illustrated by a simulation study of a biological wastewater treatment plant on which disturbances of various types are imposed. The results show that the methodology can be used to determine and co-ordinate control actions in order to shift the control objective and improve the effluent quality.
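A compact sketch of the monitoring chain: project the process data onto principal components, then soft-classify the scores with fuzzy c-means so each sample receives membership degrees in the operational states. Pure numpy throughout; the data, cluster count and fuzzifier are illustrative, and the set-point derivation that would use the memberships is not shown.

```python
# PCA scores (via SVD) followed by fuzzy c-means soft classification.
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (50, 6)), rng.normal(4, 1, (50, 6))])  # two regimes

# PCA on mean-centred data; keep two score dimensions.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T

def fuzzy_cmeans(Z, c=2, m=2.0, iters=100):
    u = rng.dirichlet(np.ones(c), size=len(Z))  # random initial memberships
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ Z) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(Z[:, None, :] - centers[None], axis=2) + 1e-12
        inv = 1.0 / d ** (2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)  # standard FCM membership update
    return u, centers

memberships, centers = fuzzy_cmeans(scores)
print(memberships[:3].round(2))  # membership of first samples in each state
```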
Abstract:
1. There are a variety of methods that could be used to increase the efficiency of the design of experiments; however, it is only recently that such methods have been considered in the design of clinical pharmacology trials. 2. Two such methods, termed data-dependent (e.g. simulation) and data-independent (e.g. analytical evaluation of the information in a particular design), are increasingly being used as efficient methods for designing clinical trials. These two design methods have tended to be viewed as competitive, although a complementary role in design is proposed here. 3. The impetus for the use of these two methods has been the need for a more fully integrated approach to the drug development process that specifically allows for sequential development (i.e. where the results of early-phase studies influence later-phase studies). 4. The present article briefly presents the background and theory that underpin both the data-dependent and -independent methods, with the use of illustrative examples from the literature. In addition, the potential advantages and disadvantages of each method are discussed.
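A toy contrast of the two methods for a two-arm trial: a data-dependent (simulation) power estimate next to a data-independent analytical calculation for the same design. The effect size, standard deviation and sample size are hypothetical.

```python
# Data-dependent vs data-independent evaluation of one two-arm trial design.
import numpy as np
from scipy import stats

n, effect, sd, alpha = 40, 0.5, 1.0, 0.05

# Data-dependent: simulate many trials and count significant results.
rng = np.random.default_rng(3)
n_sim, hits = 5000, 0
for _ in range(n_sim):
    a = rng.normal(0.0, sd, n)
    b = rng.normal(effect, sd, n)
    if stats.ttest_ind(a, b).pvalue < alpha:
        hits += 1
print("simulated power:", hits / n_sim)

# Data-independent: normal-approximation power for the same design.
z_alpha = stats.norm.ppf(1 - alpha / 2)
z = effect / (sd * np.sqrt(2 / n))
print("analytic power:", 1 - stats.norm.cdf(z_alpha - z) + stats.norm.cdf(-z_alpha - z))
```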