27 results for Data Driven Modeling
Abstract:
An extensive research program focused on the characterization of various complex metallurgical smelting and coal combustion slags is being undertaken. The research combines both experimental and thermodynamic modeling studies. The approach is illustrated by work on the PbO-ZnO-Al2O3-FeO-Fe2O3-CaO-SiO2 system. Experimental measurements of the liquidus and solidus have been undertaken under oxidizing and reducing conditions using equilibration, quenching, and electron probe X-ray microanalysis. The experimental program has been planned so as to obtain data for thermodynamic model development as well as for pseudo-ternary liquidus diagrams that can be used directly by process operators. Thermodynamic modeling has been carried out using the computer system FACT, which contains thermodynamic databases with over 5000 compounds and evaluated solution models. The FACT package is used for the calculation of multiphase equilibria in multicomponent systems of industrial interest. A modified quasi-chemical solution model is used for the liquid slag phase. New optimizations have been carried out, which significantly improve the accuracy of the thermodynamic models for lead/zinc smelting and coal combustion processes. Examples of experimentally determined and calculated liquidus diagrams are presented. These examples provide information of direct relevance to various metallurgical smelting and coal combustion processes.
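For readers unfamiliar with what such a package computes, the sketch below shows the simplest possible liquidus calculation, the Schröder-van Laar (cryoscopic) equation for an ideal binary solution. FACT instead minimizes the total Gibbs energy over all phases using assessed multicomponent models; the enthalpy of fusion and melting point used here are illustrative placeholders, not values from this work.

```python
# Minimal sketch: liquidus temperature of an ideal binary solution via the
# Schroeder-van Laar (cryoscopic) equation. A package such as FACT instead
# minimizes total Gibbs energy with assessed solution models. All numbers
# below are illustrative placeholders, not data from the paper.
import numpy as np

R = 8.314  # J/(mol K)

def liquidus_T(x, dH_fus, T_m):
    """Temperature at which the pure solid is in equilibrium with an ideal
    liquid of mole fraction x: ln(x) = -(dH_fus/R) * (1/T - 1/T_m)."""
    return 1.0 / (1.0 / T_m - R * np.log(x) / dH_fus)

dH_fus = 30_000.0   # J/mol, assumed enthalpy of fusion
T_m = 1700.0        # K, assumed melting point

for x in (1.0, 0.9, 0.8, 0.7):
    print(f"x = {x:.1f}  ->  liquidus T = {liquidus_T(x, dH_fus, T_m):7.1f} K")
```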
Abstract:
The infection of insect cells with baculovirus was described in a mathematical model as part of a structured dynamic model of whole animal-cell metabolism. The model presented here is capable of simulating cell population dynamics, the concentrations of extracellular and intracellular viral components, and the heterologous product titers. The model describes the whole process of viral infection and the effect of the infection on the host cell metabolism. Dynamic simulation of the model in batch and fed-batch mode gave good agreement between model predictions and experimental data. Optimum conditions for insect cell culture and viral infection in batch and fed-batch culture were studied using the model.
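As a rough illustration of the kind of dynamics such a model captures, the sketch below integrates a drastically simplified infection model (uninfected cells, infected cells, free virus, product). It is not the paper's structured model, and all rate constants are assumed.

```python
# Minimal sketch of a baculovirus infection model: uninfected cells S,
# infected cells I, extracellular virus V, and heterologous product P.
# A simplified stand-in for the paper's structured model; all parameter
# values are assumed for illustration.
import numpy as np
from scipy.integrate import solve_ivp

mu, k_inf, k_rel, k_lys, q_p = 0.03, 1e-9, 5.0, 0.01, 0.2  # assumed rates

def rhs(t, y):
    S, I, V, P = y
    dS = mu * S - k_inf * S * V          # growth minus infection
    dI = k_inf * S * V - k_lys * I       # infection minus lysis
    dV = k_rel * I - k_inf * S * V       # virus release minus uptake
    dP = q_p * I                         # product formation by infected cells
    return [dS, dI, dV, dP]

sol = solve_ivp(rhs, (0.0, 120.0), [1e6, 0.0, 1e4, 0.0], dense_output=True)
print("final product titer (arbitrary units):", sol.y[3, -1])
```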
Abstract:
A numerical model of heat transfer in fluidized-bed coating of solid cylinders is presented. By defining suitable dimensionless parameters, the governing equations and their associated initial and boundary conditions are discretized using the method of orthogonal collocation, and the resulting ordinary differential equations are solved simultaneously for the dimensionless coating thickness and wall temperatures. Parametric studies showed that the dimensionless coating thickness and wall temperature depend on the relative heat capacities of the polymer powder and object, the latent heat of fusion, and the size of the cylinder. Model predictions for the coating thickness and wall temperature compare reasonably well with numerical predictions and experimental coating data in the literature and with our own coating experiments using copper cylinders immersed in nylon-11 and polyethylene powders. (C) 2001 Elsevier Science Ltd. All rights reserved.
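The coupled growth/cooling structure of the problem can be conveyed by a much simpler quasi-steady sketch than the paper's orthogonal-collocation formulation: heat conducted through the molten layer grows the coating while depleting the wall's stored heat. All property values below are assumed for illustration only.

```python
# Minimal quasi-steady sketch of fluidized-bed coating: heat conducted
# through the molten layer (thickness d) from the hot wall at T_w to the
# melt front at T_melt grows the coating and cools the wall. A lumped
# stand-in for the paper's collocation solution; all properties assumed.
from scipy.integrate import solve_ivp

k_p, rho_p, L_f = 0.25, 1000.0, 2.0e5  # polymer conductivity, density, latent heat
C_wall, T_melt = 3.0e4, 460.0          # wall heat capacity per area (J/m^2 K), melt temp (K)

def rhs(t, y):
    d, T_w = y
    q = k_p * (T_w - T_melt) / max(d, 1e-5)  # conductive flux through layer
    return [q / (rho_p * L_f),                # melt-front (coating) growth rate
            -q / C_wall]                      # wall cooling rate

sol = solve_ivp(rhs, (0.0, 30.0), [1e-4, 600.0])
print(f"coating thickness after 30 s: {sol.y[0, -1]*1e3:.2f} mm")
```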
Abstract:
Acceptance-probability-controlled simulated annealing with an adaptive move generation procedure, an optimization technique derived from the simulated annealing algorithm, is presented. The adaptive move generation procedure was compared against a random move generation procedure on seven multiminima test functions, as well as on synthetic data resembling the optical constants of a metal. In all cases the adaptive algorithm converged faster and escaped local minima more effectively. The algorithm was then applied to fit the model dielectric function to data for platinum and aluminum.
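A minimal sketch of the idea, assuming a toy objective: the move size is adapted so that the observed acceptance rate tracks a target, which is the kind of control the abstract describes.

```python
# Minimal sketch of simulated annealing with an adaptive move generator:
# the step size is tuned so the observed acceptance probability tracks a
# target value. The objective and all tuning constants are assumed.
import math, random

def objective(x):
    return x * x + 10.0 * math.sin(3.0 * x)   # toy multiminima function

def anneal(x, T=5.0, step=1.0, target_acc=0.4, iters=5000):
    f = objective(x)
    accepted = 0
    for i in range(1, iters + 1):
        x_new = x + random.uniform(-step, step)       # candidate move
        f_new = objective(x_new)
        if f_new < f or random.random() < math.exp(-(f_new - f) / T):
            x, f = x_new, f_new
            accepted += 1
        if i % 100 == 0:                              # adapt every 100 moves
            acc = accepted / 100.0
            step *= 1.1 if acc > target_acc else 0.9  # widen or shrink moves
            accepted = 0
            T *= 0.95                                 # cool
    return x, f

print(anneal(x=4.0))
```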
Abstract:
We propose a simulated-annealing-based genetic algorithm for solving model parameter estimation problems. The algorithm incorporates advantages of both genetic algorithms and simulated annealing. Tests on computer-generated synthetic data that closely resemble the optical constants of a metal were performed to compare the efficiency of plain genetic algorithms against the simulated-annealing-based genetic algorithm. These tests assess the ability of the algorithms to find the global minimum and the accuracy of the values obtained for the model parameters. Finally, the algorithm with the best performance is used to fit the model dielectric function to data for platinum and aluminum. (C) 1997 Optical Society of America.
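A minimal sketch of such a hybrid, assuming a toy least-squares fitness: offspring replace their parents through a Metropolis test at a decreasing temperature, combining GA recombination with SA-style acceptance.

```python
# Minimal sketch of a simulated-annealing-based genetic algorithm: a plain
# GA whose survivor selection uses a Metropolis test at a decreasing
# temperature, so worse offspring are occasionally kept early on.
# The fitness function and all settings are assumed for illustration.
import math, random

def fitness(x):                      # toy parameter-estimation residual
    return sum((xi - 1.0) ** 2 for xi in x)

def sa_ga(dim=3, pop_size=20, gens=200, T0=1.0):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    T = T0
    for g in range(gens):
        new_pop = []
        for parent in pop:
            mate = random.choice(pop)
            cut = random.randrange(dim)
            child = parent[:cut] + mate[cut:]               # crossover
            i = random.randrange(dim)
            child[i] += random.gauss(0.0, 0.3)              # mutation
            d = fitness(child) - fitness(parent)
            if d < 0 or random.random() < math.exp(-d / T): # Metropolis test
                new_pop.append(child)
            else:
                new_pop.append(parent)
        pop = new_pop
        T *= 0.98                                           # cooling schedule
    return min(pop, key=fitness)

print(sa_ga())
```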
Abstract:
The absorption kinetics of solutes given with subcutaneous administration of fluids is ill-defined. The gamma emitter technetium pertechnetate enabled the absorption rate to be estimated independently using two approaches. In the first approach, the counts remaining at the site were estimated by imaging above the subcutaneous administration site, whereas in the second approach, the plasma technetium concentration-time profiles were monitored up to 8 hr after technetium administration. Boluses of technetium pertechnetate were given both intravenously and subcutaneously on separate occasions with a multiple dosing regimen using three doses on each occasion. The disposition of technetium after iv administration was best described by biexponential kinetics with a Vss of 0.30 +/- 0.11 L/kg and a clearance of 30.0 +/- 13.1 ml/min. The subcutaneous absorption kinetics was best described as a single exponential process with a half-life of 18.16 +/- 3.97 min by image analysis and a half-life of 11.58 +/- 2.48 min using plasma technetium time data. The bioavailability of technetium by the subcutaneous route was estimated to be 0.96 +/- 0.12. The absorption half-life showed no consistent change with the duration of the subcutaneous infusion. The amount remaining at the absorption site over time was similar whether analyzed using image analysis or using plasma concentrations, assuming multiexponential disposition kinetics and a first-order absorption process. Profiles of the fraction remaining at the absorption site generated by deconvolution analysis, image analysis, and the assumption of a constant first-order absorption process were similar. Slowing of absorption from the subcutaneous administration site is apparent after the last bolus dose in three of the subjects and can be associated with the stopping of the infusion. In a fourth subject, the retention of technetium at the subcutaneous site is more consistent with accumulation of technetium near the absorption site as a result of systemic recirculation.
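The image-analysis estimate amounts to a log-linear fit of the counts remaining at the site; a minimal sketch with synthetic counts (not the study's data) is given below.

```python
# Minimal sketch of the image-analysis estimate: counts remaining at the
# subcutaneous site are fit to a single exponential, and the absorption
# half-life is ln(2)/k. The counts below are synthetic, not study data.
import numpy as np

t = np.array([0.0, 10.0, 20.0, 30.0, 45.0, 60.0])     # minutes after dose
counts = 1e5 * np.exp(-t * np.log(2) / 18.0)          # synthetic, t1/2 = 18 min
counts *= np.random.default_rng(0).normal(1.0, 0.02, t.size)  # counting noise

slope, intercept = np.polyfit(t, np.log(counts), 1)   # log-linear regression
k = -slope                                            # first-order rate constant
print(f"estimated absorption half-life: {np.log(2) / k:.1f} min")
```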
Abstract:
Land-related information about the Earth's surface is commonly found in two forms: (1) map information and (2) satellite image data. Satellite imagery provides a good visual picture of what is on the ground, but complex image processing is required to interpret features in an image scene. Increasingly, methods are being sought to integrate the knowledge embodied in map information into the interpretation task, or, alternatively, to bypass interpretation and perform biophysical modeling directly on derived data sources. A cartographic modeling language, as a generic map analysis package, is suggested as a means to integrate geographical knowledge and imagery in a process-oriented view of the Earth. Specialized cartographic models may be developed by users, which incorporate mapping information in performing land classification. In addition, a cartographic modeling language may be enhanced with operators suited to processing remotely sensed imagery. We demonstrate the usefulness of a cartographic modeling language for pre-processing satellite imagery, and define two new cartographic operators that evaluate image neighborhoods as post-processing operations to interpret thematic map values. The language and operators are demonstrated with an example image classification task.
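One plausible reading of such a post-processing neighborhood operator is a majority (mode) filter over a classified raster; the sketch below is an assumed illustration of that idea, not the operators defined in the paper.

```python
# Minimal sketch of a post-processing neighborhood operator: a 3x3
# majority (mode) filter that smooths a classified raster by reassigning
# each cell to the most common class around it. The tiny raster is
# illustrative only.
import numpy as np
from scipy.ndimage import generic_filter

def majority(window):
    values = window.astype(int)
    return np.bincount(values).argmax()   # most frequent class in window

classified = np.array([[1, 1, 2, 2],
                       [1, 3, 2, 2],
                       [1, 1, 1, 2],
                       [3, 1, 1, 1]])

smoothed = generic_filter(classified, majority, size=3, mode="nearest")
print(smoothed)
```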
Abstract:
Within the information systems field, the task of conceptual modeling involves building a representation of selected phenomena in some domain. High-quality conceptual-modeling work is important because it facilitates early detection and correction of system development errors. It also plays an increasingly important role in activities like business process reengineering and documentation of best-practice data and process models in enterprise resource planning systems. Yet little research has been undertaken on many aspects of conceptual modeling. In this paper, we propose a framework to motivate research that addresses the following fundamental question: How can we model the world to better facilitate our developing, implementing, using, and maintaining more valuable information systems? The framework comprises four elements: conceptual-modeling grammars, conceptual-modeling methods, conceptual-modeling scripts, and conceptual-modeling contexts. We provide examples of the types of research that have already been undertaken on each element and illustrate research opportunities that exist.
Abstract:
With the advent of object-oriented languages and the portability of Java, the development and use of class libraries has become widespread. Effective class reuse depends on class reliability, which in turn depends on thorough testing. This paper describes a class testing approach based on modeling each test case with a tuple and then generating large numbers of tuples to thoroughly cover an input space with many interesting combinations of values. The testing approach is supported by the Roast framework for the testing of Java classes. Roast provides automated tuple generation based on boundary values, unit operations that support driver standardization, and test case templates used for code generation. Roast produces thorough, compact test drivers with low development and maintenance cost. The framework and tool support are illustrated on a number of non-trivial classes, including a graphical user interface policy manager. Quantitative results are presented to substantiate the practicality and effectiveness of the approach. Copyright (C) 2002 John Wiley & Sons, Ltd.
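The tuple-generation idea can be sketched independently of Roast: each test case is a tuple drawn from the cross product of per-parameter boundary values. The class under test and its boundaries below are hypothetical, and the sketch is in Python rather than the Java the framework targets.

```python
# Minimal sketch of tuple-based test generation: each test case is a tuple
# of parameter values drawn from the cross product of per-parameter
# boundary values. The "class under test" and its boundaries are
# hypothetical placeholders.
from itertools import product

# Hypothetical boundary values for a stack-like class: capacity and the
# number of pushes, exercised at and around their limits.
boundaries = {
    "capacity": [1, 2, 100],
    "n_pushes": [0, 1, 99, 100, 101],
}

test_tuples = list(product(*boundaries.values()))
print(f"{len(test_tuples)} generated test cases")

for capacity, n_pushes in test_tuples:
    # A real driver would instantiate the class under test and assert on
    # the expected behavior, e.g. overflow when n_pushes > capacity.
    expect_overflow = n_pushes > capacity
    print(capacity, n_pushes, "overflow expected" if expect_overflow else "ok")
```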
Abstract:
A thermodynamic approach is developed in this paper to describe the behavior of a subcritical fluid in the neighborhood of the vapor-liquid interface and close to a graphite surface. The fluid is modeled as a system of parallel molecular layers. The Helmholtz free energy of the fluid is expressed as the sum of the intrinsic Helmholtz free energies of the separate layers and the potential energy of their mutual interactions, calculated with the 10-4 potential. This Helmholtz free energy is described by an equation of state (such as the Bender or Peng-Robinson equation), which provides a convenient means of obtaining the intrinsic Helmholtz free energy of each molecular layer as a function of its two-dimensional density. All molecular layers of the bulk fluid are in mechanical equilibrium corresponding to the minimum of the total potential energy. In the case of adsorption, the external potential exerted by the graphite layers is added to the free energy. The state of the interface zone between the liquid and vapor phases, or the state of the adsorbed phase, is determined by the minimum of the grand potential. In the case of phase equilibrium the approach leads to the distribution of density and pressure over the transition zone. The interrelation between the collision diameter and the potential well depth was determined from the surface tension. It was shown that the distance between neighboring molecular layers changes substantially in the vapor-liquid transition zone and in the adsorbed phase with loading. The approach is considered in this paper for the case of adsorption of argon and nitrogen on carbon black. In both cases excellent agreement with the experimental data was achieved without additional assumptions and fitting parameters, except for the fluid-solid potential well depth. The approach has far-reaching consequences and can be readily extended to the modeling of adsorption in slit pores of carbonaceous materials and to the analysis of multicomponent adsorption systems. (C) 2002 Elsevier Science (USA).
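For reference, the 10-4 potential named here has the standard form below for a molecule at distance z from a single lattice plane; the paper's layer-layer interactions use the same functional form, and the symbols are the conventional ones rather than notation taken from the paper.

```latex
% Standard 10-4 potential for a molecule at distance z from one lattice
% plane of areal density rho_s (symbols as conventionally defined):
\varphi_{10\text{-}4}(z) = 2\pi \rho_s \varepsilon_{fs} \sigma_{fs}^{2}
\left[ \frac{2}{5} \left( \frac{\sigma_{fs}}{z} \right)^{10}
     - \left( \frac{\sigma_{fs}}{z} \right)^{4} \right]
```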
Abstract:
Measurement of the exchange of substances between blood and tissue has been a long-lasting challenge to physiologists, and considerable theoretical and experimental accomplishments were achieved before the development of positron emission tomography (PET). Today, when modeling data from modern PET scanners, little use is made of this earlier microvascular research in the compartmental models that have become the standard by which the vast majority of dynamic PET data are analysed. However, modern PET scanners provide data with sufficient temporal resolution and good counting statistics to allow estimation of parameters in models with more physiological realism. We explore the standard compartmental model and find that incorporation of blood flow leads to paradoxes, such as kinetic rate constants being time-dependent, and tracers being cleared from a capillary faster than they can be supplied by blood flow. The inability of the standard model to incorporate blood flow consequently raises a need for models that include more physiology, and we develop microvascular models which remove the inconsistencies. The microvascular models can be regarded as a revision of the input function. Whereas the standard model uses the organ inlet concentration as the concentration throughout the vascular compartment, we consider models that make use of spatial averaging of the concentrations in the capillary volume, which is what the PET scanner actually registers. The microvascular models are developed for both single- and multi-capillary systems and include effects of non-exchanging vessels. They are suitable for analysing dynamic PET data from any capillary bed using either intravascular or diffusible tracers, in terms of physiological parameters which include regional blood flow. (C) 2003 Elsevier Ltd. All rights reserved.
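The standard model in question is, in its one-tissue form, dC_t/dt = K1*C_a(t) - k2*C_t, with the inlet concentration C_a used for the whole vascular space, which is precisely the assumption the authors revise. A minimal sketch with assumed rate constants and input function:

```python
# Minimal sketch of the standard one-tissue compartmental model:
# dC_t/dt = K1*C_a(t) - k2*C_t, driven by an arterial input function C_a
# taken at the organ inlet (the assumption the paper questions).
# Rate constants and input function are assumed for illustration.
import numpy as np
from scipy.integrate import solve_ivp

K1, k2 = 0.1, 0.05                        # assumed rate constants (1/min)

def C_a(t):
    return t * np.exp(-t / 2.0)           # assumed arterial input function

sol = solve_ivp(lambda t, C: K1 * C_a(t) - k2 * C, (0.0, 60.0), [0.0],
                dense_output=True)
t = np.linspace(0.0, 60.0, 7)
print(np.round(sol.sol(t)[0], 3))         # tissue concentration over time
```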
Abstract:
Comprehensive measurements are presented of the piezometric head in an unconfined aquifer during steady, simple harmonic oscillations driven by a hydrostatic clear water reservoir through a vertical interface. The results are analyzed and used to test existing hydrostatic and nonhydrostatic, small-amplitude theories along with capillary fringe effects. As expected, the amplitude of the water table wave decays exponentially. However, the decay rates and phase lags indicate the influence of both vertical flow and capillary effects. The capillary effects are reconciled with observations of water table oscillations in a sand column with the same sand. The effects of vertical flows and the corresponding nonhydrostatic pressure are reasonably well described by small-amplitude theory for water table waves in finite-depth aquifers. That includes the oscillation amplitudes being greater at the bottom than at the top and the phase lead of the bottom compared with the top. The main problems with respect to interpreting the measurements through existing theory relate to the complicated boundary condition at the interface between the driving head reservoir and the aquifer. That is, the small-amplitude, finite-depth expansion solution, which matches a hydrostatic boundary condition between the bottom and the mean driving head level, is unrealistic with respect to the pressure variation above this level. Hence it cannot describe the finer details of the multiple-mode behavior close to the driving head boundary. The mean water table height initially increases with distance from the forcing boundary but then decreases again, and its asymptotic value is considerably smaller than that previously predicted for finite-depth aquifers without capillary effects. Just as the mean water table over-height is smaller than predicted by capillarity-free shallow-aquifer models, so is the amplitude of the second harmonic. In fact, there is no indication of extra second harmonics (in addition to that contained in the driving head) being generated at the interface or in the interior.
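For context, the hydrostatic shallow-aquifer benchmark against which such measurements are usually compared is the linearized Boussinesq water table wave, in which the exponential decay rate and the wavenumber (hence the phase lag) are equal; the notation below is conventional, not the paper's.

```latex
% Linearized Boussinesq (hydrostatic, shallow-aquifer) water table wave:
% forcing A*cos(omega*t) at x = 0 decays and lags with equal rate k.
% K: hydraulic conductivity, D: aquifer depth, n_e: effective porosity.
\eta(x,t) = A\, e^{-kx} \cos(\omega t - kx), \qquad
k = \sqrt{\frac{n_e\,\omega}{2 K D}}
```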