71 results for order fulfillment process
in University of Queensland eSpace - Australia
Abstract:
We consider the electronic properties of layered molecular crystals of the type θ-D2A, where A is an anion and D is a donor molecule such as bis-(ethylenedithia-tetrathiafulvalene) (BEDT-TTF), arranged in the θ-type pattern within the layers. We argue that the simplest strongly correlated electron model that can describe the rich phase diagram of these materials is the extended Hubbard model on the square lattice at one-quarter filling. In the limit where the Coulomb repulsion on a single site is large, the nearest-neighbor Coulomb repulsion V plays a crucial role. When V is much larger than the intermolecular hopping integral t, the ground state is an insulator with charge ordering. In this phase antiferromagnetism arises due to a novel fourth-order superexchange process around a plaquette on the square lattice. We argue that the charge-ordered phase is destroyed below a critical nonzero value V_c of the order of t. Slave-boson theory is used to demonstrate this explicitly for the SU(N) generalization of the model, in the large-N limit. We also discuss the relevance of the model to the all-organic family β-(BEDT-TTF)2SF5YSO3, where Y = CH2CF2, CH2, CHF.
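For reference, the extended Hubbard model invoked above has the standard form (written here in conventional notation; the symbols t, U and V match those in the abstract, and the quarter-filling condition, one carrier per two lattice sites, is as stated there):

```latex
H = -t \sum_{\langle ij \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} \right)
  + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
  + V \sum_{\langle ij \rangle} n_{i} n_{j},
\qquad
\frac{1}{N_{\mathrm{sites}}} \sum_{i} \langle n_{i} \rangle = \tfrac{1}{2}.
```

The charge-ordered insulating phase discussed in the abstract corresponds to the limit U → ∞ with V ≫ t.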
Abstract:
The absorption kinetics of solutes given by subcutaneous administration of fluids are ill-defined. The gamma emitter technetium pertechnetate enabled the absorption rate to be estimated independently using two approaches. In the first approach, the counts remaining at the site were estimated by imaging above the subcutaneous administration site, whereas in the second approach, the plasma technetium concentration-time profiles were monitored up to 8 hr after technetium administration. Boluses of technetium pertechnetate were given both intravenously and subcutaneously on separate occasions with a multiple dosing regimen using three doses on each occasion. The disposition of technetium after iv administration was best described by biexponential kinetics with a V-ss of 0.30 +/- 0.11 L/kg and a clearance of 30.0 +/- 13.1 ml/min. The subcutaneous absorption kinetics were best described as a single exponential process with a half-life of 18.16 +/- 3.97 min by image analysis and a half-life of 11.58 +/- 2.48 min using plasma technetium-time data. The bioavailability of technetium by the subcutaneous route was estimated to be 0.96 +/- 0.12. The absorption half-life showed no consistent change with the duration of the subcutaneous infusion. The amount remaining at the absorption site with time was similar when analyzed using image analysis and using plasma concentrations assuming multiexponential disposition kinetics and a first-order absorption process. Profiles of the fraction remaining at the absorption site generated by deconvolution analysis, image analysis, and the assumption of a constant first-order absorption process were similar. Slowing of absorption from the subcutaneous administration site is apparent after the last bolus dose in three of the subjects and can be associated with the stopping of the infusion.
In a fourth subject, the retention of technetium at the subcutaneous site is more consistent with accumulation of technetium near the absorption site as a result of systemic recirculation.
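The single-exponential absorption model described above can be illustrated with a short numerical sketch: fit the log of the counts remaining at the site against time and convert the slope to a half-life. The function name and the synthetic data are illustrative, not from the study.

```python
import numpy as np

def absorption_half_life(t_min, counts):
    """Estimate a first-order absorption half-life from counts remaining
    at the injection site, assuming a single exponential decline:
    counts(t) = counts(0) * exp(-ka * t)."""
    slope, _ = np.polyfit(t_min, np.log(counts), 1)
    ka = -slope                 # first-order absorption rate constant (1/min)
    return np.log(2) / ka       # half-life in minutes

# Synthetic example with an 18 min half-life, the order of magnitude
# reported for the image-analysis estimate.
t = np.linspace(0, 120, 13)
ka_true = np.log(2) / 18.0
counts = 1e5 * np.exp(-ka_true * t)
print(round(absorption_half_life(t, counts), 1))  # → 18.0
```

With noisy clinical data the same linearized fit applies, but a weighted or nonlinear regression would normally be preferred.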
Abstract:
An important consideration in the development of mathematical models for dynamic simulation is the identification of the appropriate mathematical structure. By building models with an efficient structure that is devoid of redundancy, it is possible to create simple, accurate and functional models. This leads not only to efficient simulation, but to a deeper understanding of the important dynamic relationships within the process. In this paper, a method is proposed for systematic model development for startup and shutdown simulation which is based on the identification of the essential process structure. The key tool in this analysis is the method of nonlinear perturbations for structural identification and model reduction. Starting from a detailed mathematical process description, both singular and regular structural perturbations are detected. These techniques are then used to give insight into the system structure and, where appropriate, to eliminate superfluous model equations or reduce them to other forms. This process retains the ability to interpret the reduced-order model in terms of the physico-chemical phenomena. Using this model reduction technique it is possible to attribute observable dynamics to particular unit operations within the process. This relationship then highlights the unit operations which must be accurately modelled in order to develop a robust plant model. The technique generates detailed insight into the dynamic structure of the models, providing a basis for system re-design and dynamic analysis. The technique is illustrated on the modelling of an evaporator startup. Copyright (C) 1996 Elsevier Science Ltd
Abstract:
In order to analyse the effect of modelling assumptions in a formal, rigorous way, a syntax of modelling assumptions has been defined. This syntax enables us to represent modelling assumptions as transformations acting on the set of model equations. The notions of syntactical correctness and semantical consistency of sets of modelling assumptions are defined, and methods for checking them are described. It is shown on a simple example how different modelling assumptions act on the model equations, and their effect on the differential index of the resulting model is also indicated.
Abstract:
This work studied the structure-hepatic disposition relationships for cationic drugs of varying lipophilicity using a single-pass, in situ rat liver preparation. The lipophilicity of the cationic drugs studied in this work decreases in the following order: diltiazem > propranolol > labetalol > prazosin > antipyrine > atenolol. Parameters characterizing the hepatic distribution and elimination kinetics of the drugs were estimated using the multiple indicator dilution method. The kinetic model used to describe drug transport (the two-phase stochastic model) integrated cytoplasmic binding kinetics and belongs to the class of barrier-limited and space-distributed liver models. The hepatic extraction ratio (E) (0.30-0.92) increased with lipophilicity. The intracellular binding rate constant (k(on)) and the equilibrium amount ratios characterizing the slowly and rapidly equilibrating binding sites (K-S and K-R) increase with the lipophilicity of the drug (k(on): 0.05-0.35 s(-1); K-S: 0.61-16.67; K-R: 0.36-0.95), whereas the intracellular unbinding rate constant (k(off)) decreases with the lipophilicity of the drug (0.081-0.021 s(-1)). The ratio of the influx (k(in)) and efflux (k(out)) rate constants, k(in)/k(out), increases with increasing pK(a) value of the drug [from 1.72 for antipyrine (pK(a) = 1.45) to 9.76 for propranolol (pK(a) = 9.45)], the differences in k(in)/k(out) for the different drugs mainly arising from ion trapping in the mitochondria and lysosomes. The values of the intrinsic elimination clearance (CLint), permeation clearance (CLpT), and permeability-surface area product (PS) all increase with the lipophilicity of the drug [CLint (ml . min(-1) . g(-1) of liver): 10.08-67.41; CLpT (ml . min(-1) . g(-1) of liver): 10.80-5.35; PS (ml . min(-1) . g(-1) of liver): 14.59-90.54]. It is concluded that cationic drug kinetics in the liver can be modeled using models that integrate the presence of cytoplasmic binding, a hepatocyte barrier, and a vascular transit density function.
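The ion-trapping mechanism invoked to explain the k(in)/k(out) trend follows the pH-partition hypothesis: only the un-ionized fraction of a weak base crosses the membrane, so drug accumulates in acidic organelles. A minimal sketch (the compartment pH values are typical literature figures, not values from this paper):

```python
def trapping_ratio(pKa, pH_in, pH_out=7.4):
    """Steady-state total-drug concentration ratio (inside/outside) for a
    weak base, assuming only the un-ionized form permeates; from the
    Henderson-Hasselbalch relation, ionized/un-ionized = 10**(pKa - pH)."""
    return (1 + 10 ** (pKa - pH_in)) / (1 + 10 ** (pKa - pH_out))

# A strong base such as propranolol (pKa ~9.45) is heavily trapped in an
# acidic lysosome (pH ~5); a very weak base such as antipyrine (pKa ~1.45)
# is essentially not trapped at all.
print(round(trapping_ratio(9.45, 5.0)))    # large accumulation ratio
print(round(trapping_ratio(1.45, 5.0), 3)) # ~1, no trapping
```

This qualitative contrast mirrors the reported k(in)/k(out) span from 1.72 (antipyrine) to 9.76 (propranolol), although the in situ ratios also reflect binding and permeability effects.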
Abstract:
In this work, we present a systematic approach to the representation of modelling assumptions. Modelling assumptions form the fundamental basis for the mathematical description of a process system. These assumptions can be translated into either additional mathematical relationships or constraints between model variables, equations, balance volumes or parameters. In order to analyse the effect of modelling assumptions in a formal, rigorous way, a syntax of modelling assumptions has been defined. The smallest indivisible syntactical element, the so-called assumption atom, has been identified as a triplet. With this syntax a modelling assumption can be described as either an elementary assumption, i.e. an assumption consisting of only an assumption atom, or a composite assumption consisting of a conjunction of elementary assumptions. This syntax enables us to represent modelling assumptions as transformations acting on the set of model equations. The notions of syntactical correctness and semantical consistency of sets of modelling assumptions are defined, and necessary conditions for checking them are given. These transformations can be used in several ways and their implications can be analysed by formal methods. The modelling assumptions define model hierarchies, that is, a series of model families each belonging to a particular equivalence class. These model equivalence classes can be related to primal assumptions regarding the definition of mass, energy and momentum balance volumes, and to secondary and tertiary assumptions regarding the presence or absence and the form of mechanisms within the system. Within equivalence classes there are many model members, these being related by algebraic model transformations of the particular model. We show how these model hierarchies are driven by the underlying assumption structure and indicate some implications for system dynamics and complexity issues. (C) 2001 Elsevier Science Ltd. All rights reserved.
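The triplet structure of an assumption atom, and the conjunction forming a composite assumption, can be sketched in code. The field names and the example assumptions below are illustrative, not taken from the paper:

```python
from typing import NamedTuple

class AssumptionAtom(NamedTuple):
    """Smallest indivisible syntactical element of a modelling assumption,
    represented as a triplet relating a model element to a keyword or to
    another model element."""
    subject: str    # a model variable, equation, balance volume or parameter
    relation: str   # e.g. "is", "is negligible compared to"
    value: str      # a keyword or another model element

# A composite assumption is a conjunction of elementary assumptions,
# modelled here simply as a list of atoms.
composite = [
    AssumptionAtom("heat loss to surroundings", "is", "negligible"),
    AssumptionAtom("vapour holdup", "is negligible compared to", "liquid holdup"),
]
print(len(composite))  # 2
```

In the paper's framework each such atom maps to a transformation of the model equations (e.g. deleting a term or replacing a differential equation by an algebraic one), which is where the syntactical-correctness and consistency checks apply.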
Abstract:
We compare the performance of two different low-storage filter diagonalisation (LSFD) strategies in the calculation of complex resonance energies of the HO2 radical. The first is carried out within a complex-symmetric Lanczos subspace representation [H. Zhang, S.C. Smith, Phys. Chem. Chem. Phys. 3 (2001) 2281]. The second involves harmonic inversion of a real autocorrelation function obtained via a damped Chebychev recursion [V.A. Mandelshtam, H.S. Taylor, J. Chem. Phys. 107 (1997) 6756]. We find that while the Chebychev approach has the advantage of utilizing real algebra in the time-consuming process of generating the vector recursion, the Lanczos method (using complex vectors) requires fewer iterations, especially for the low-energy part of the spectrum. The overall efficiency in calculating resonances with these two methods is comparable for this challenging system. (C) 2001 Elsevier Science B.V. All rights reserved.
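The real-algebra recursion underlying the second approach can be sketched with a small dense matrix. The Mandelshtam-Taylor scheme uses sparse matrix-vector products and a coordinate-dependent damping operator applied near the absorbing boundary; here the Hamiltonian is a random symmetric matrix and the damping is trivial, purely for illustration:

```python
import numpy as np

def chebyshev_autocorrelation(H, psi0, n_steps, damping):
    """Damped Chebychev recursion psi_{k+1} = D (2 H psi_k - D psi_{k-1})
    generating the real autocorrelation sequence c_k = <psi0|psi_k>.
    H must be scaled so that its spectrum lies in [-1, 1]."""
    D = np.diag(damping)
    psi_prev = psi0.copy()
    psi_curr = D @ (H @ psi0)
    c = [psi0 @ psi_prev, psi0 @ psi_curr]
    for _ in range(2, n_steps):
        psi_next = D @ (2 * (H @ psi_curr) - D @ psi_prev)
        psi_prev, psi_curr = psi_curr, psi_next
        c.append(psi0 @ psi_curr)
    return np.array(c)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 50))
H = (A + A.T) / 2
H /= 1.1 * np.max(np.abs(np.linalg.eigvalsh(H)))  # scale spectrum into [-1, 1]
psi0 = rng.standard_normal(50)
psi0 /= np.linalg.norm(psi0)
c = chebyshev_autocorrelation(H, psi0, 200, damping=np.ones(50))
print(c.shape)  # (200,)
```

The sequence c_k is then fed to a harmonic-inversion step (not shown) to extract complex resonance energies; only real arithmetic is needed up to that point, which is the advantage noted in the abstract.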
Abstract:
This paper addresses robust model-order reduction of a high-dimensional nonlinear partial differential equation (PDE) model of a complex biological process. Based on a nonlinear, distributed-parameter model of the same process, which was validated against experimental data from an existing, pilot-scale BNR activated sludge plant, we developed a state-space model with 154 state variables in this work. A general algorithm for robustly reducing the nonlinear PDE model is presented, and based on an investigation of five state-of-the-art model-order reduction techniques, we are able to reduce the original model to a model with only 30 states without incurring pronounced modelling errors. The singular perturbation approximation balanced truncation technique is found to give the lowest modelling errors in low frequency ranges and hence is deemed most suitable for controller design and other real-time applications. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
A steady-state mathematical model for co-current spray drying of sugar-rich foods was developed with the application of the glass transition temperature concept. A maltodextrin-sucrose solution was used as a model sugar-rich food. The model included mass, heat and momentum balances for single-droplet drying as well as the temperature and humidity profiles of the drying medium. A log-normal volume distribution of the droplets was generated at the exit of the rotary atomizer. This distribution was divided into a number of bins to form a system of non-linear first-order differential equations as a function of the axial distance along the drying chamber. The model was used to calculate the changes in droplet diameter, density, temperature, moisture content and velocity in association with the change of air properties along the axial distance. The difference between the outlet air temperature and the glass transition temperature of the final products (ΔT) was considered as an indicator of the stickiness of the particles in the spray drying process. The calculated and experimental ΔT values were close, indicating successful validation of the model. (c) 2004 Elsevier Ltd. All rights reserved.
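The binning of the log-normal droplet distribution, with each bin seeding one droplet class whose balances are integrated along the chamber axis, can be sketched as follows. The geometric-mean diameter and geometric standard deviation below are illustrative values, not the paper's:

```python
import numpy as np
from scipy.special import erf

def lognormal_bins(d_gm, sigma_g, n_bins, span=3.0):
    """Discretize a log-normal droplet volume distribution into n_bins
    size classes over +/- span geometric standard deviations; returns
    bin-centre diameters and the volume fraction in each bin
    (renormalized to absorb the truncated tails)."""
    ln_edges = np.linspace(np.log(d_gm) - span * np.log(sigma_g),
                           np.log(d_gm) + span * np.log(sigma_g), n_bins + 1)
    edges = np.exp(ln_edges)
    centres = np.sqrt(edges[:-1] * edges[1:])   # geometric mid-points
    z = (ln_edges - np.log(d_gm)) / np.log(sigma_g)
    cdf = 0.5 * (1 + erf(z / np.sqrt(2)))       # log-normal CDF at the edges
    fractions = np.diff(cdf)
    return centres, fractions / fractions.sum()

centres, fractions = lognormal_bins(d_gm=70e-6, sigma_g=1.8, n_bins=20)
print(len(centres), round(fractions.sum(), 6))  # 20 1.0
```

Each (diameter, fraction) pair then contributes one set of mass, heat and momentum ODEs to the system integrated along the axial distance.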
Abstract:
This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms in order to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations, and often constraints are built into the most fundamental methods (e.g. area-based matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking a solution that is deemed most likely. Using this formulation, this paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that it is not possible to guarantee a unique solution, no matter how many images are taken of the scene, their orientation, or even how much color variation is in the scene itself. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large for even very small voxel spaces (a 5 x 5 voxel space gives 10 to 10^7 solutions). This shows the need for constraints to reduce the solution space to a reasonable size. Finally, it is noted that because of the discrete nature of the formulation, the solution space size can be easily calculated, making the formulation a useful tool to numerically evaluate the usefulness of any constraints that are added.
Abstract:
The conceptual complexity of problems was manipulated to probe the limits of human information processing capacity. Participants were asked to interpret graphically displayed statistical interactions. In such problems, all independent variables need to be considered together, so that decomposition into smaller subtasks is constrained, and thus the order of the interaction directly determines conceptual complexity. As the order of the interaction increases, the number of variables increases. Results showed a significant decline in accuracy and speed of solution from three-way to four-way interactions. Furthermore, performance on a five-way interaction was at chance level. These findings suggest that a structure defined on four variables is at the limit of human processing capacity.
Abstract:
A heat transfer coefficient gauge has been built, obeying particular rules in order to ensure the relevance and accuracy of the collected information. The gauge body is made of the same material as the die casting die (H13). It is equipped with six thermocouples located at different depths in the body and with a sapphire light pipe. The light pipe is linked to an optic fibre, which is connected to a monochromatic pyrometer. Thermocouple and pyrometer measurements are recorded with a data logger. A high-pressure die casting die was instrumented with one such gauge. A set of 150 castings was produced and the data recorded. During casting, some process parameters were varied, such as piston velocity, intensification pressure, delay before the switch to the intensification stage, and temperature of the alloy. The data were treated with an inverse method in order to transform the temperature measurements into heat flux density and heat transfer coefficient plots. The piston velocity and the initial temperature of the die appear to be the process parameters with the greatest influence on the heat transfer. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
Most adverse environmental impacts result from design decisions made long before manufacturing or usage. To prevent this situation, several authors have proposed the application of life cycle assessment (LCA) at the very first phases of the design of a process, a product or a service. The study in this paper presents an innovative thermal drying process for sewage sludge called fry-drying, in which dewatered sludge is directly contacted in the dryer with hot recycled cooking oils (RCO) as the heating medium. Considering the practical difficulties in disposing of these two wastes, fry-drying presents a potentially convenient method for their combined elimination by incineration of the final fry-dried sludge. An analytical comparison between a conventional drying process and the newly proposed fry-drying process is reported, with reference to several environmental impact categories. The results of this study, applied at the earliest stages of the design of the process, assist in evaluating the feasibility of such a system compared to a current disposal process for the drying and incineration of sewage sludge.
Abstract:
In the last decade, with the expansion of organizational scope and the tendency toward outsourcing, there has been an increasing need for Business Process Integration (BPI), understood as the sharing of data and applications among business processes. The research efforts and development paths in BPI pursued by many academic groups and system vendors, targeting heterogeneous system integration, continue to face several conceptual and technological challenges. This article begins with a brief review of major approaches and emerging standards for BPI. We then introduce a rule-driven messaging approach to BPI, which is based on the harmonization of messages in order to compose a new, often cross-organizational, process. We also introduce the design of a temporal first-order language (Harmonized Messaging Calculus) that provides the formal foundation for general rules governing business process execution. Definitions of the language terms, formulae, safety, and expressiveness are introduced and considered in detail.
Abstract:
Molecular interactions between microcrystalline cellulose (MCC) and water were investigated by attenuated total reflection infrared (ATR/IR) spectroscopy. Moisture-content-dependent IR spectra during a drying process of wet MCC were measured. In order to distinguish the overlapping O–H stretching bands arising from both cellulose and water, principal component analysis (PCA), generalized two-dimensional correlation spectroscopy (2DCOS) and second derivative analysis were applied to the obtained spectra. Four typical drying stages were clearly separated by PCA, and the spectral variations in each stage were analyzed by 2DCOS. In the drying time range of 0–41 min, a decrease in the broad band around 3390 cm−1 was observed, indicating that bulk water was evaporated. In the drying time range of 49–195 min, decreases in the bands at 3412, 3344 and 3286 cm−1, assigned to the O6H6···O3′ interchain hydrogen bonds (H-bonds), the O3H3···O5 intrachain H-bonds and the H-bonds in the Iβ phase of MCC, respectively, were observed. The result of the second derivative analysis suggests that water molecules mainly interact with the O6H6···O3′ interchain H-bonds. Thus, the H-bonding network in MCC is stabilized by H-bonds between water and the OH groups constructing the O6H6···O3′ interchain H-bonds, and the removal of the water molecules induces changes in the H-bonding network in MCC.
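The PCA step used to separate the drying stages can be sketched generically as an SVD of the mean-centred spectral matrix (rows = spectra at successive drying times, columns = wavenumbers). The synthetic decaying "water band" below is illustrative only, not the authors' data or code:

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """PCA via SVD of the mean-centred data matrix; returns the sample
    scores (used to separate drying stages along drying time) and the
    component loadings (spectral shapes)."""
    X = spectra - spectra.mean(axis=0)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    loadings = Vt[:n_components]
    return scores, loadings

# Synthetic drying series: a broad band near 3390 cm-1 whose intensity
# decays with drying time, plus a little noise.
wn = np.linspace(3000, 3700, 400)
times = np.linspace(0.0, 1.0, 30)
band = np.exp(-((wn - 3390.0) / 120.0) ** 2)
rng = np.random.default_rng(2)
spectra = np.outer(np.exp(-3 * times), band) + 1e-3 * rng.standard_normal((30, 400))
scores, loadings = pca_scores(spectra)
print(scores.shape, loadings.shape)  # (30, 2) (2, 400)
```

Plotting the first score against drying time would reveal the stage boundaries; in the study, the stage-wise subsets identified this way are then passed to 2DCOS.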