151 results for Process Modeling
Abstract:
The absorption kinetics of solutes given with the subcutaneous administration of fluids are ill-defined. The gamma emitter technetium pertechnetate enabled the absorption rate to be estimated independently using two approaches. In the first approach, the counts remaining at the site were estimated by imaging above the subcutaneous administration site, whereas in the second approach, the plasma technetium concentration-time profiles were monitored up to 8 hr after technetium administration. Boluses of technetium pertechnetate were given both intravenously and subcutaneously on separate occasions with a multiple dosing regimen using three doses on each occasion. The disposition of technetium after iv administration was best described by biexponential kinetics with a V-ss of 0.30 +/- 0.11 L/kg and a clearance of 30.0 +/- 13.1 ml/min. The subcutaneous absorption kinetics were best described as a single exponential process with a half-life of 18.16 +/- 3.97 min by image analysis and a half-life of 11.58 +/- 2.48 min using plasma technetium time data. The bioavailability of technetium by the subcutaneous route was estimated to be 0.96 +/- 0.12. The absorption half-life showed no consistent change with the duration of the subcutaneous infusion. The amount remaining at the absorption site with time was similar when analyzed using image analysis and when using plasma concentrations, assuming multiexponential disposition kinetics and a first-order absorption process. Profiles of the fraction remaining at the absorption site generated by deconvolution analysis, image analysis, and the assumption of a constant first-order absorption process were similar. Slowing of absorption from the subcutaneous administration site is apparent after the last bolus dose in three of the subjects and can be associated with the stopping of the infusion.
In a fourth subject, the retention of technetium at the subcutaneous site is more consistent with accumulation of technetium near the absorption site as a result of systemic recirculation.
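The single-exponential absorption process described above can be sketched numerically. The half-lives (18.16 min by image analysis, 11.58 min from plasma data) are taken from the abstract; the function names and the illustrative time points are my own.

```python
import math

def absorption_rate_constant(half_life_min):
    """First-order rate constant: k = ln(2) / t_half."""
    return math.log(2) / half_life_min

def fraction_remaining(t_min, half_life_min):
    """Fraction of dose left at the subcutaneous site, A(t)/A(0) = exp(-k t)."""
    return math.exp(-absorption_rate_constant(half_life_min) * t_min)

# Reported half-lives: 18.16 min (image analysis) vs 11.58 min (plasma data).
for t in (0, 10, 30, 60):
    print(t, round(fraction_remaining(t, 18.16), 3),
          round(fraction_remaining(t, 11.58), 3))
```

Note that the faster plasma-derived half-life predicts less drug remaining at the site at any given time, which is consistent with the two estimates bracketing the true absorption rate.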
Abstract:
In this work, we present a systematic approach to the representation of modelling assumptions. Modelling assumptions form the fundamental basis for the mathematical description of a process system. These assumptions can be translated into either additional mathematical relationships or constraints between model variables, equations, balance volumes or parameters. In order to analyse the effect of modelling assumptions in a formal, rigorous way, a syntax of modelling assumptions has been defined. The smallest indivisible syntactical element, the so-called assumption atom, has been identified as a triplet. With this syntax, a modelling assumption can be described as an elementary assumption, i.e. an assumption consisting of a single assumption atom, or as a composite assumption consisting of a conjunction of elementary assumptions. The above syntax of modelling assumptions enables us to represent modelling assumptions as transformations acting on the set of model equations. The notions of syntactical correctness and semantical consistency of sets of modelling assumptions are defined and necessary conditions for checking them are given. These transformations can be used in several ways and their implications can be analysed by formal methods. The modelling assumptions define model hierarchies, that is, a series of model families, each belonging to a particular equivalence class. These model equivalence classes can be related to primal assumptions regarding the definition of mass, energy and momentum balance volumes, and to secondary and tertiary assumptions regarding the presence or absence and the form of mechanisms within the system. Within equivalence classes there are many model members, these being related by algebraic model transformations for the particular model. We show how these model hierarchies are driven by the underlying assumption structure and indicate some implications for system dynamics and complexity issues. (C) 2001 Elsevier Science Ltd. 
All rights reserved.
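The triplet-based "assumption atom" syntax could be represented with a small data structure. The field names below (subject, relation, value) are a hypothetical reading of the triplet, not the paper's exact syntax; composite assumptions are modelled as conjunctions (sets) of atoms, as the abstract describes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AssumptionAtom:
    # Hypothetical triplet shape: (model element, relation keyword, value).
    subject: str    # e.g. a variable, balance volume, or parameter
    relation: str   # e.g. "is", "equals", "is-negligible"
    value: str

# A composite assumption is a conjunction (here, a set) of elementary atoms.
isothermal = AssumptionAtom("reactor temperature", "is", "constant")
steady_holdup = AssumptionAtom("holdup", "is", "steady")
steady_isothermal = frozenset({isothermal, steady_holdup})

print(len(steady_isothermal))
```

Frozen dataclasses are hashable, so atoms can be collected into sets, which makes checking conjunctions for consistency (e.g. detecting contradictory atoms with the same subject) a natural set operation.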
Abstract:
Land related information about the Earth's surface is commonly found in two forms: (1) map information and (2) satellite image data. Satellite imagery provides a good visual picture of what is on the ground, but complex image processing is required to interpret features in an image scene. Increasingly, methods are being sought to integrate the knowledge embodied in map information into the interpretation task, or, alternatively, to bypass interpretation and perform biophysical modeling directly on derived data sources. A cartographic modeling language, as a generic map analysis package, is suggested as a means to integrate geographical knowledge and imagery in a process-oriented view of the Earth. Specialized cartographic models may be developed by users, which incorporate mapping information in performing land classification. In addition, a cartographic modeling language may be enhanced with operators suited to processing remotely sensed imagery. We demonstrate the usefulness of a cartographic modeling language for pre-processing satellite imagery, and define two new cartographic operators that evaluate image neighborhoods as post-processing operations to interpret thematic map values. The language and operators are demonstrated with an example image classification task.
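As a sketch of the kind of neighborhood operator described, the snippet below implements a 3x3 majority filter that re-evaluates each cell of a classified raster from its neighborhood. This is a generic post-classification smoothing operator under my own assumptions, not necessarily either of the paper's two operators.

```python
from collections import Counter

def focal_majority(grid):
    """Reassign each cell to the most common class in its 3x3 neighborhood
    (including itself): a simple post-classification smoothing operator."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            neighbours = [grid[rr][cc]
                          for rr in range(max(0, r - 1), min(rows, r + 2))
                          for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = Counter(neighbours).most_common(1)[0][0]
    return out

# A lone "water" pixel inside a "forest" patch is smoothed away.
classified = [["forest"] * 3, ["forest", "water", "forest"], ["forest"] * 3]
print(focal_majority(classified)[1][1])
```

Edge cells are handled by clipping the window to the raster bounds, which is one common convention for focal operators in map algebra.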
Abstract:
Within the information systems field, the task of conceptual modeling involves building a representation of selected phenomena in some domain. High-quality conceptual-modeling work is important because it facilitates early detection and correction of system development errors. It also plays an increasingly important role in activities like business process reengineering and documentation of best-practice data and process models in enterprise resource planning systems. Yet little research has been undertaken on many aspects of conceptual modeling. In this paper, we propose a framework to motivate research that addresses the following fundamental question: How can we model the world to better facilitate our developing, implementing, using, and maintaining more valuable information systems? The framework comprises four elements: conceptual-modeling grammars, conceptual-modeling methods, conceptual-modeling scripts, and conceptual-modeling contexts. We provide examples of the types of research that have already been undertaken on each element and illustrate research opportunities that exist.
Abstract:
The tensile deformation behavior of a range of supersaturated Mg-Al solid solutions and an as-cast magnesium alloy AM60 has been studied. The Mg-Al alloys were tested at room temperature while the alloy AM60 was tested in the temperature range 293-573 K. The differences in the deformation behavior of the alloys are discussed in terms of hardening and softening processes. In order to identify which processes were active, the stress dependence of the strain-hardening coefficient was assessed using Lukac and Balik's model of hardening and softening. The analysis indicates that hardening involves solid solution hardening and interaction with forest dislocations and non-dislocation obstacles such as second phase particles. Cross slip is not a significant recovery process in the temperature range 293-423 K. At temperatures between 473 and 523 K the analysis suggests that softening is controlled by cross slip and climb of dislocations. At temperatures above 523 K softening seems to be controlled by dynamic recrystallisation. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
We examine the transport of methane in microporous carbon by performing equilibrium and nonequilibrium molecular dynamics simulations over a range of pore sizes, densities, and temperatures. We interpret these simulation results using two models of the transport process. At low densities, we consider a molecular flow model, in which intermolecular interactions are neglected, and find excellent agreement between transport diffusion coefficients determined from simulation and those predicted by the model. Simulation results indicate that the model can be applied up to fluid densities of the order of 0.1-1 nm(-3). Above these densities, we consider a slip flow model, combining hydrodynamic theory with a slip condition at the solid-fluid interface. As the diffusion coefficient at low densities can be accurately determined by the molecular flow model, we also consider a model where the slip condition is supplied by the molecular flow model. We find that both density-dependent models provide a useful means of estimating the transport coefficient that compares well with simulation. (C) 2004 American Institute of Physics.
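At low densities where intermolecular interactions are negligible, molecular flow is governed by molecule-wall collisions. One common closed form, the Knudsen diffusivity for a cylindrical pore, is D_K = (d/3)*sqrt(8RT/(pi*M)); whether the paper's molecular flow model takes exactly this form is an assumption, so the sketch below is only indicative of the scaling involved.

```python
import math

R = 8.314  # gas constant, J/(mol K)

def knudsen_diffusivity(pore_diameter_m, temperature_K, molar_mass_kg):
    """Knudsen diffusion coefficient for a cylindrical pore,
    D_K = (d/3) * sqrt(8 R T / (pi M)): molecule-wall collisions dominate,
    so D scales linearly with pore size and with sqrt(T/M)."""
    mean_speed = math.sqrt(8 * R * temperature_K / (math.pi * molar_mass_kg))
    return pore_diameter_m / 3.0 * mean_speed

# Methane (M = 16.04 g/mol) in a 1 nm pore at 300 K.
D = knudsen_diffusivity(1e-9, 300.0, 0.01604)
print(f"{D:.3e} m^2/s")
```

The linear dependence on pore diameter is the key signature of the molecular-flow regime: doubling the pore width doubles the predicted diffusivity.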
Abstract:
Superplastic bulging is the most successful application of superplastic forming (SPF) in industry, but the non-uniform wall thickness distribution of parts formed by it is a common technical problem yet to be overcome. Based on a rigid-viscoplastic finite element program developed by the authors for simulating the sheet superplastic forming process, combined with prediction of microstructure variations (such as grain growth and cavity growth), a simple and efficient preform design method is proposed and applied to the design of a preform mould for manufacturing parts with uniform wall thickness. Examples of formed parts are presented here to demonstrate that the technology can be used to improve the uniformity of wall thickness to meet practical requirements. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
Much research has been devoted over the years to investigating and advancing the techniques and tools used by analysts when they model. In contrast to what academics, software providers and their resellers promote as current practice, the aim of this research was to determine whether practitioners still embrace conceptual modeling seriously. In addition, what are the most popular techniques and tools used for conceptual modeling? What are the major purposes for which conceptual modeling is used? The study found that the top six most frequently used modeling techniques and methods were ER diagramming, data flow diagramming, systems flowcharting, workflow modeling, UML, and structured charts. Modeling technique use was found to decrease significantly from smaller to medium-sized organizations, but then to increase significantly in larger organizations (proxying for large, complex projects). Technique use was thus found to follow an inverted U-shaped curve, contrary to some prior explanations. Additionally, an important contribution of this study was the identification of the factors that uniquely influence the decision of analysts to continue to use modeling, viz., communication (using diagrams) to/from stakeholders, internal knowledge (lack of) of techniques, user expectations management, understanding models' integration into the business, and tool/software deficiencies. The highest ranked purposes for which modeling was undertaken were database design and management, business process documentation, business process improvement, and software development. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
A steady state mathematical model for co-current spray drying was developed for sugar-rich foods with the application of the glass transition temperature concept. Maltodextrin-sucrose solution was used as a sugar-rich food model. The model included mass, heat and momentum balances for a single droplet drying as well as temperature and humidity profiles of the drying medium. A log-normal volume distribution of the droplets was generated at the exit of the rotary atomizer. This generation created a certain number of bins to form a system of non-linear first-order differential equations as a function of the axial distance of the drying chamber. The model was used to calculate the changes of droplet diameter, density, temperature, moisture content and velocity in association with the change of air properties along the axial distance. The difference between the outlet air temperature and the glass transition temperature of the final products (ΔT) was considered as an indicator of stickiness of the particles in the spray drying process. The calculated and experimental ΔT values were close, indicating successful validation of the model. (c) 2004 Elsevier Ltd. All rights reserved.
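A minimal sketch of the kind of first-order differential equation integrated along the axial distance is shown below, using a lumped drying-rate law and explicit Euler steps; the rate constant, moisture values, and function name are purely illustrative, not the paper's parameters.

```python
def dry_droplet(moisture0, k_dry, equilibrium_moisture, dz, n_steps):
    """Euler integration of a lumped drying-rate equation along axial
    distance z:  dX/dz = -k (X - X_eq).  Illustrative parameters only;
    the real model couples mass, heat and momentum balances per droplet bin."""
    X = moisture0
    profile = [X]
    for _ in range(n_steps):
        X += -k_dry * (X - equilibrium_moisture) * dz
        profile.append(X)
    return profile

# Dry-basis moisture falling from 3.0 toward an equilibrium of 0.05
# over 2 m of chamber length (200 steps of dz = 0.01 m).
profile = dry_droplet(moisture0=3.0, k_dry=2.0, equilibrium_moisture=0.05,
                      dz=0.01, n_steps=200)
print(round(profile[-1], 3))
```

In the full model, one such equation per droplet-size bin (plus the air-side balances) forms the coupled system solved along the chamber axis.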
Abstract:
The leaching of elements from the surface of charged fly ash particles is known to be an unsteady process. The mass transfer resistance provided by the diffuse double layer has been quantified as one of the reasons for this delayed leaching. In this work, a model based on mass transfer principles for predicting the concentration of calcium hydroxide in the diffuse double layer is presented. The significant difference between the predicted calcium hydroxide concentration and the experimentally measured values is explained.
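The mass-transfer-resistance idea can be sketched with textbook film theory: treat the diffuse double layer as a stagnant film of thickness delta, so the leaching flux is N = (D/delta)(C_s - C_b). This is a generic film model with illustrative numbers, not the paper's actual model.

```python
def film_mass_transfer_flux(D, film_thickness, c_surface, c_bulk):
    """Steady film-theory flux N = (D / delta) * (C_s - C_b).  A thicker
    diffuse layer (larger delta) means a larger resistance, hence the
    slower, delayed leaching the abstract describes."""
    k_c = D / film_thickness  # mass transfer coefficient, m/s
    return k_c * (c_surface - c_bulk)

# Illustrative numbers: aqueous diffusivity ~1e-9 m^2/s, 1 um vs 5 um film.
flux_thin = film_mass_transfer_flux(1e-9, 1e-6, 20.0, 2.0)   # mol/(m^2 s)
flux_thick = film_mass_transfer_flux(1e-9, 5e-6, 20.0, 2.0)
print(flux_thin, flux_thick)
```

The flux scales inversely with film thickness, which is the essential mechanism by which the double layer retards leaching in this picture.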
Abstract:
We present a new method of modeling imaging of laser beams in the presence of diffraction. Our method is based on the concept of first orthogonally expanding the resultant diffraction field (that would have otherwise been obtained by the laborious application of the Huygens diffraction principle) and then representing it by an effective multimodal laser beam with different beam parameters. We show not only that the process of obtaining the new beam parameters is straightforward but also that it permits a different interpretation of the diffraction-caused focal shift in laser beams. All of the criteria that we have used to determine the minimum number of higher-order modes needed to accurately represent the diffraction field show that the mode-expansion method is numerically efficient. Finally, the characteristics of the mode-expansion method are such that it allows modeling of a vast array of diffraction problems, regardless of the characteristics of the incident laser beam, the diffracting element, or the observation plane. (C) 2005 Optical Society of America.
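The core of a mode-expansion method of this kind is projecting the diffraction field onto an orthonormal set of modes. The sketch below computes expansion coefficients c_n = integral of psi_n(x) E(x) dx for 1-D Hermite-Gaussian modes (unit waist) by numerical quadrature; the particular basis, normalization, and test field are my assumptions, not the paper's exact formulation.

```python
import math

def hermite(n, x):
    """Physicists' Hermite polynomial via H_{k+1} = 2x H_k - 2k H_{k-1}."""
    h_prev, h = 1.0, 2.0 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2.0 * x * h - 2.0 * k * h_prev
    return h

def hg_mode(n, x):
    """Orthonormal 1-D Hermite-Gaussian mode (unit beam waist)."""
    norm = 1.0 / math.sqrt(2.0 ** n * math.factorial(n) * math.sqrt(math.pi))
    return norm * hermite(n, x) * math.exp(-x * x / 2.0)

def expansion_coefficient(field, n, x_max=8.0, steps=4000):
    """c_n = integral of psi_n(x) * E(x) dx, by the trapezoid rule."""
    dx = 2.0 * x_max / steps
    total = 0.0
    for i in range(steps + 1):
        x = -x_max + i * dx
        w = 0.5 if i in (0, steps) else 1.0
        total += w * hg_mode(n, x) * field(x) * dx
    return total

# Expand a plain Gaussian field: all power should land in the n = 0 mode.
field = lambda x: math.exp(-x * x / 2.0)
c0 = expansion_coefficient(field, 0)
c1 = expansion_coefficient(field, 1)
print(round(c0, 4), round(c1, 4))
```

For a genuinely diffracted field, the decay rate of |c_n| with n is the kind of criterion one can use to decide how many higher-order modes are needed for an accurate representation.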
Abstract:
Increasingly, large areas of native tropical forests are being transformed into a mosaic of human dominated land uses with scattered mature remnants and secondary forests. In general, at the end of the land clearing process, the landscape will have two forest components: a stable component of surviving mature forests, and a dynamic component of secondary forests of different ages. As the proportion of mature forests continues to decline, secondary forests play an increasing role in the conservation and restoration of biodiversity. This paper aims to predict and explain spatial and temporal patterns in the age of remnant mature and secondary forests in lowland Colombian landscapes. We analyse the age distributions of forest fragments, using detailed temporal land cover data derived from aerial photographs. Ordinal logistic regression analysis was applied to model the spatial dynamics of mature and secondary forest patches. In particular, the effect of soil fertility, accessibility and auto-correlated neighbourhood terms on forest age and time of isolation of remnant patches was assessed. In heavily transformed landscapes, forests account for approximately 8% of the total landscape area, of which three quarters are comprised of secondary forests. Secondary forest growth adjacent to mature forest patches increases mean patch size and core area, and therefore plays an important ecological role in maintaining landscape structure. The regression models show that forest age is positively associated with the amount of neighbouring forest, and negatively associated with the amount of neighbouring secondary vegetation, so the older the forest is the less secondary vegetation there is adjacent to it. Accessibility and soil fertility also have a negative but variable influence on the age of forest remnants. The probability of future clearing if current conditions hold is higher for regenerated than mature forests. 
The challenge of biodiversity conservation and restoration in dynamic and spatially heterogeneous landscape mosaics composed of mature and secondary forests is discussed. (c) 2004 Elsevier B.V. All rights reserved.
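Ordinal logistic regression of the kind applied above can be sketched in its proportional-odds form, P(Y <= j) = logistic(theta_j - eta), where eta is a linear predictor built from covariates such as soil fertility, accessibility, and neighbourhood terms. The thresholds and predictor value below are hypothetical, not fitted to the paper's data.

```python
import math

def cumulative_logit_probs(eta, thresholds):
    """Proportional-odds (ordinal logistic) class probabilities for ordered
    categories (e.g. forest age classes): P(Y <= j) = logistic(theta_j - eta),
    with class probabilities as successive differences of the CDF."""
    cdf = [1.0 / (1.0 + math.exp(-(t - eta))) for t in thresholds] + [1.0]
    probs = [cdf[0]] + [cdf[j] - cdf[j - 1] for j in range(1, len(cdf))]
    return probs

# Hypothetical predictor eta (e.g. b1*soil_fertility + b2*accessibility + ...)
# and three thresholds separating four ordered age classes.
probs = cumulative_logit_probs(eta=0.5, thresholds=[-1.0, 0.0, 1.0])
print([round(p, 3) for p in probs], round(sum(probs), 6))
```

A larger eta (e.g. higher accessibility, which the abstract reports as negatively associated with age) shifts probability mass toward the younger end of the ordered classes under this sign convention.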
Abstract:
The results presented in this report form part of a larger global study on the major issues in BPM. Only one part of the larger study is reported here, viz. interviews with BPM experts. Interviews with BPM tool vendors, together with focus groups involving user organizations, are continuing in parallel and will lay the groundwork for the identification of BPM issues on a global scale via a survey (including a Delphi study). Through this multi-method approach, we identify four distinct sets of outcomes. First, as is the focus of this report, we identify the BPM issues as perceived by BPM experts. Second, the research design allows us to gain insight into the opinions of organizations deploying BPM solutions. Third, an understanding of organizations' misconceptions of BPM technologies, as confronted by BPM tool vendors, is obtained. Last, we seek to gain an understanding of BPM issues on a global scale, together with knowledge of matters of concern. This final outcome is aimed at producing an industry-driven research agenda which will inform practitioners and, in particular, the research community world-wide on issues and challenges that are prevalent or emerging in BPM and related areas.