64 results for User Modeling
in University of Queensland eSpace - Australia
Abstract:
Cpfg is a program for simulating and visualizing plant development, based on the theory of L-systems. A special-purpose programming language, used to specify plant models, is an essential feature of cpfg. We review postulates of L-system theory that have influenced the design of this language. We then present the main constructs of this language, and evaluate it from a user's perspective.
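The string-rewriting core of an L-system simulator such as cpfg can be sketched in a few lines of Python. This is a minimal illustration of parallel rewriting only; the function and rule names are illustrative and do not reflect cpfg's actual modeling language.

    # Minimal L-system rewriting sketch (illustrative, not cpfg syntax).
    def rewrite(axiom: str, rules: dict[str, str], steps: int) -> str:
        """Apply the production rules in parallel to every symbol, `steps` times."""
        s = axiom
        for _ in range(steps):
            s = "".join(rules.get(c, c) for c in s)  # identity if no rule matches
        return s

    # Lindenmayer's algae model: A -> AB, B -> A
    print(rewrite("A", {"A": "AB", "B": "A"}, 4))  # prints "ABAABABA"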
Abstract:
L-studio/cpfg is a plant modeling software system designed for Windows 95/98/NT platforms. Its key components are the L-system-based plant simulator cpfg and the modeling environment called L-studio. We overview version 1.0 of this system from the user's perspective.
Abstract:
The advantages of antennas that can conform to the shape of the body to which they are attached are obvious. However, electromagnetic modeling of such unusually shaped antennas can be difficult. In this paper, the commercially available software SolidWorks(TM) is used for accurately drawing complex shapes, in conjunction with the electromagnetic software FEKO(TM), to model the EM behavior of conformal antennas. The application of SolidWorks and custom-written software allows all the required information that forms the analyzed structure to be automatically inserted into FEKO, and gives the user complete control over the antenna being modeled. This approach is illustrated by a number of simulation examples of single, wideband, multi-band planar and curved patch antennas.
Abstract:
Much research has been devoted over the years to investigating and advancing the techniques and tools used by analysts when they model. In contrast to what academics, software providers, and their resellers promote as best practice, the aim of this research was to determine whether practitioners still take conceptual modeling seriously. In addition, what are the most popular techniques and tools used for conceptual modeling? What are the major purposes for which conceptual modeling is used? The study found that the top six most frequently used modeling techniques and methods were ER diagramming, data flow diagramming, systems flowcharting, workflow modeling, UML, and structured charts. Modeling technique use was found to decrease significantly from smaller to medium-sized organizations, but then to increase significantly in larger organizations (proxying for large, complex projects). Technique use was also found to significantly follow an inverted U-shaped curve, contrary to some prior explanations. Additionally, an important contribution of this study was the identification of the factors that uniquely influence the decision of analysts to continue to use modeling, viz., communication (using diagrams) to/from stakeholders, internal knowledge (or lack thereof) of techniques, user expectations management, understanding of models' integration into the business, and tool/software deficiencies. The highest ranked purposes for which modeling was undertaken were database design and management, business process documentation, business process improvement, and software development. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
The development of models in Earth Sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from finalized. There is therefore a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. Once verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as typically large-scale meshes have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually, this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define his model and to implement high-level solution algorithms (e.g. the Newton-Raphson scheme or the Crank-Nicolson scheme) or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class. Objects of the Data class allow generating, holding, accessing, and manipulating data in such a way that the representation best suited to the particular context is transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples which illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
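As a rough illustration of the intended user experience, the sketch below sets up a simple Poisson problem with escript's LinearPDE class on a Finley mesh. The names used (Rectangle, LinearPDE, kronecker, whereZero) follow general esys-escript conventions, but exact signatures may differ between versions, so treat this as an assumption-laden sketch rather than verified usage.

    # Hedged sketch: -div(grad u) = 1 on the unit square, u = 0 on the left edge.
    from esys.escript import kronecker, whereZero
    from esys.escript.linearPDEs import LinearPDE
    from esys.finley import Rectangle

    domain = Rectangle(n0=40, n1=40, l0=1.0, l1=1.0)  # Finley finite element mesh
    x = domain.getX()

    pde = LinearPDE(domain)
    # The general form includes -div(A grad u) = Y, with u fixed where q > 0.
    pde.setValue(A=kronecker(domain), Y=1.0, q=whereZero(x[0]))
    u = pde.getSolution()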
Abstract:
This report describes recent updates to the custom-built data-acquisition hardware operated by the Center for Hypersonics. In 2006, an ISA-to-USB bridging card was developed as part of Luke Hillyard's final-year thesis. This card allows the hardware to be connected to any recent personal computer via a (USB or RS232) serial port and provides a number of simple text-based commands for control of the hardware. A graphical user interface program was also updated to help the experimenter manage the data acquisition functions. Sampled data is stored in text files that have been compressed with the gzip format. To simplify the later archiving or transport of the data, all files specific to a shot are stored in a single directory. This includes a text file for the run description, the signal configuration file, and the individual sampled-data files, one for each signal that was recorded.
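Given this per-shot directory layout, post-processing is straightforward. The sketch below shows how the gzip-compressed sampled-data files of one shot might be read back in Python; the file names, the one-sample-per-line layout, and the directory name are hypothetical, not the acquisition software's actual conventions.

    # Hedged sketch: read every gzip-compressed signal file in a shot directory.
    import glob
    import gzip
    import os

    def read_shot(shot_dir: str) -> dict[str, list[float]]:
        """Return {signal_name: samples}; assumes one sample value per line."""
        signals = {}
        for path in glob.glob(os.path.join(shot_dir, "*.gz")):
            name = os.path.basename(path)[:-3]  # strip the .gz suffix
            with gzip.open(path, "rt") as f:    # text mode: the files are plain text
                signals[name] = [float(line) for line in f if line.strip()]
        return signals

    data = read_shot("shot_7319")  # hypothetical directory name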
Abstract:
Business process design is primarily driven by process improvement objectives. However, the role of control objectives stemming from regulations and standards is becoming increasingly important for businesses in light of recent events that led to some of the largest scandals in corporate history. As organizations strive to meet compliance agendas, there is an evident need to provide systematic approaches that assist in the understanding of the interplay between (often conflicting) business and control objectives during business process design. In this paper, our objective is twofold. We will firstly present a research agenda in the space of business process compliance, identifying major technical and organizational challenges. We then tackle a part of the overall problem space, which deals with the effective modeling of control objectives and subsequently their propagation onto business process models. Control objective modeling is proposed through a specialized modal logic based on normative systems theory, and the visualization of control objectives on business process models is achieved procedurally. The proposed approach is demonstrated in the context of a purchase-to-pay scenario.
Abstract:
OctVCE is a Cartesian cell CFD code produced especially for numerical simulations of shock and blast wave interactions with complex geometries. Virtual Cell Embedding (VCE) was chosen as its Cartesian cell kernel as it is simple to code and sufficient for practical engineering design problems. This also makes the code much more ‘user-friendly’ than structured grid approaches, as the gridding process is done automatically. The CFD methodology relies on a finite-volume formulation of the unsteady Euler equations, solved using a standard explicit Godunov (MUSCL) scheme. Both octree-based adaptive mesh refinement and shared-memory parallel processing capability have also been incorporated. For further details on the theory behind the code, see the companion report 2007/12.
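To make the solver terminology concrete, here is a minimal, hedged sketch of one explicit finite-volume update with minmod-limited MUSCL reconstruction, applied to 1D linear advection. It illustrates the flavour of a Godunov/MUSCL scheme only; OctVCE itself solves the unsteady Euler equations on Cartesian cells, which involves considerably more machinery.

    # Hedged sketch: one explicit MUSCL step for u_t + a u_x = 0 (a > 0).
    import numpy as np

    def minmod(p, q):
        """Limited slope: zero at extrema, else the smaller-magnitude difference."""
        return np.where(p * q > 0.0, np.where(np.abs(p) < np.abs(q), p, q), 0.0)

    def muscl_step(u, a, dx, dt):
        """One conservative update; stable for CFL = a*dt/dx <= 1."""
        n = len(u)
        up = np.pad(u, 2, mode="edge")                         # two ghost cells per side
        slope = minmod(up[1:-1] - up[:-2], up[2:] - up[1:-1])  # limited cell slopes
        right_face = up[1:-1] + 0.5 * slope                    # MUSCL reconstruction
        flux = a * right_face               # upwind flux, taken from the left cell
        return u - dt / dx * (flux[1:n+1] - flux[:n])

    u0 = np.where(np.arange(100) < 50, 1.0, 0.0)     # step profile
    u1 = muscl_step(u0, a=1.0, dx=0.01, dt=0.005)    # CFL = 0.5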
Abstract:
Ex vivo hematopoiesis is increasingly used for clinical applications. Models of ex vivo hematopoiesis are required to better understand the complex dynamics and to optimize hematopoietic culture processes. A general mathematical modeling framework is developed which uses traditional chemical engineering metaphors to describe the complex hematopoietic dynamics. Tanks and tubular reactors are used to describe the (pseudo-) stochastic and deterministic elements of hematopoiesis, respectively. Cells at any point in the differentiation process can belong to either an immobilized, inert phase (quiescent cells) or a mobile, active phase (cycling cells). The model describes five processes: (1) flow (differentiation), (2) autocatalytic formation (growth), (3) degradation (death), (4) phase transition from immobilized to mobile phase (quiescent to cycling transition), and (5) phase transition from mobile to immobilized phase (cycling to quiescent transition). The modeling framework is illustrated with an example concerning the effect of TGF-beta 1 on erythropoiesis. (C) 1998 Published by Elsevier Science Ltd. All rights reserved.
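A compartmental reading of this framework can be written down as a small ODE system. The sketch below is a hypothetical two-pool toy model of a single differentiation stage, with the five processes as first-order terms; all rate constants and values are illustrative and are not parameters from the paper.

    # Hedged sketch: toy quiescent/cycling two-pool model with first-order terms.
    from scipy.integrate import solve_ivp

    k_flow, k_grow, k_die = 0.05, 0.8, 0.1  # differentiation outflow, growth, death
    k_qc, k_cq = 0.3, 0.2                   # quiescent->cycling, cycling->quiescent

    def rhs(t, y):
        q, c = y                            # quiescent (inert), cycling (active)
        dq = k_cq * c - k_qc * q            # phase transitions only affect q
        dc = k_qc * q - k_cq * c + (k_grow - k_die - k_flow) * c
        return [dq, dc]

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 1.0])  # unit initial pools
    print(sol.y[:, -1])                            # pool sizes at the final time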
Abstract:
A new model proposed for the gasification of chars and carbons incorporates features of the turbostratic nanoscale structure that exists in such materials. The model also considers the effect of initial surface chemistry and different reactivities perpendicular to the edges and to the faces of the underlying crystallite planes comprising the turbostratic structure. It may be more realistic than earlier models based on pore or grain structure idealizations when the carbon contains large amounts of crystallite matter. Shrinkage of the carbon particles in the chemically controlled regime is also possible, due to the random complete gasification of crystallite planes. This mechanism can explain observations in the literature of particle size reduction. Based on the model predictions, both initial surface chemistry and the number of stacked planes in the crystallites strongly influence the reactivity and particle shrinkage. The model's predictions agree well with literature data on the air-oxidation of Spherocarb and accurately capture the variation of particle size with conversion. Model parameters are determined entirely from rate measurements.
Abstract:
An extension of the Adachi model with an adjustable broadening function, instead of the Lorentzian one, is employed to model the optical constants of GaP, InP, and InAs. The adjustable broadening is modeled by replacing the damping constant with a frequency-dependent expression. The improved flexibility of the model enables achieving excellent agreement with the experimental data. The relative rms errors obtained for the refractive index equal 1.2% for GaP, 1.0% for InP, and 1.6% for InAs. (C) 1999 American Institute of Physics. [S0021-8979(99)05807-7].
Abstract:
An analytical approach to the stress development in the coherent dendritic network during solidification is proposed. Under the assumption that stresses are developed in the network as a result of the friction resisting shrinkage-induced interdendritic fluid flow, the model predicts the stresses in the solid. The calculations reflect the expected effects of postponed dendrite coherency, slower solidification conditions, and variations of eutectic volume fraction and shrinkage. Comparing the calculated stresses to the measured shear strength of equiaxed mushy zones shows that it is possible for the stresses to exceed the strength, thereby resulting in reorientation or collapse of the dendritic network.
Abstract:
The extension of Adachi's model with a Gaussian-like broadening function, in place of the Lorentzian one, is used to model the optical dielectric function of the alloy AlxGa1-xAs. Gaussian-like broadening is accomplished by replacing the damping constant in the Lorentzian line shape with a frequency-dependent expression. In this way, the comparative simplicity of the analytic formulas of the model is preserved, while the accuracy becomes comparable to that of more intricate models, and/or models with significantly more parameters. The employed model accurately describes the optical dielectric function in the spectral range from 1.5 to 6.0 eV within the entire alloy composition range. The relative rms error obtained for the refractive index is below 2.2% for all compositions. (C) 1999 American Institute of Physics. [S0021-8979(99)00512-5].
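The replacement described above can be stated compactly. The damped-harmonic-oscillator form below, with the constant damping Gamma replaced by a frequency-dependent Gamma'(E), is a representative Gaussian-like broadening scheme of the kind used in Adachi-type models; the exact exponent and symbols are an assumption here, not quoted from the paper.

    % One oscillator contribution to the dielectric function, with
    % Gaussian-like (frequency-dependent) damping replacing the constant one.
    \varepsilon_{\mathrm{osc}}(E) = \frac{f}{E_0^2 - E^2 - i\,E\,\Gamma'(E)},
    \qquad
    \Gamma'(E) = \Gamma \exp\!\left[-\alpha \left(\frac{E - E_0}{\Gamma}\right)^{2}\right]

For alpha = 0 the Lorentzian line shape is recovered, while increasing alpha makes the broadening increasingly Gaussian-like.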
Abstract:
Optical constants of AlSb, GaSb, and InSb are modeled in the 1-6 eV spectral range. We employ an extension of Adachi's model of the optical constants of semiconductors. The model takes into account transitions at the E-0, E-0 + Delta(0), E-1, and E-1 + Delta(1) critical points, as well as higher-lying transitions, which are modeled with three damped harmonic oscillators. We do not consider the contribution of indirect transitions, since it represents a second-order perturbation and its strength should be low. Also, we do not take into account excitonic effects at the E-1 and E-1 + Delta(1) critical points, since we model the room temperature data. In spite of fewer contributions to the dielectric function compared to previous calculations involving Adachi's model, our calculations show significantly improved agreement with the experimental data. This is due to the two main distinguishing features of the calculations presented here: the use of adjustable line broadening instead of the conventional Lorentzian one, and the employment of a global optimization routine for model parameter determination.
Abstract:
The conventional convection-dispersion (also called axial dispersion) model is widely used to interrelate hepatic availability (F) and clearance (Cl) with the morphology and physiology of the liver, and to predict effects such as changes in liver blood flow on F and Cl. An extended form of the convection-dispersion model has been developed to adequately describe the outflow concentration-time profiles for vascular markers at both short and long times after bolus injections into perfused livers. The model, based on flux concentration and a convolution of catheters and large vessels, assumes that solute elimination in hepatocytes follows either fast distribution into or radial diffusion in hepatocytes. The model includes a secondary vascular compartment, postulated to be interconnecting sinusoids. Analysis of the mean hepatic transit time (MTT) and normalized variance (CV2) of solutes with extraction showed that the predictions of MTT and CV2 from the extended and conventional models are essentially identical irrespective of the magnitude of the rate constants representing permeability, volume, and clearance parameters, provided that there is significant hepatic extraction. In conclusion, the application of the newly developed extended convection-dispersion model has shown that the unweighted conventional convection-dispersion model can be used to describe the disposition of extracted solutes and, in particular, to estimate hepatic availability and clearance in both experimental and clinical situations.
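For reference, the conventional dispersion model referred to here is commonly written in dimensionless form as below, with hepatic availability given by the closed-boundary (Danckwerts) solution; these are standard forms from the dispersion-model literature, not equations quoted from this paper.

    % Dimensionless convection-dispersion equation (D_N: dispersion number,
    % R_N: efficiency number) and the steady-state hepatic availability F.
    \frac{\partial C}{\partial T} = D_N \frac{\partial^2 C}{\partial Z^2}
      - \frac{\partial C}{\partial Z} - R_N C,
    \qquad
    F = \frac{4a\, e^{1/(2D_N)}}{(1+a)^2 e^{a/(2D_N)} - (1-a)^2 e^{-a/(2D_N)}},
    \quad a = \sqrt{1 + 4 R_N D_N}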