343 results for Approximat Model (scheme)
at University of Queensland eSpace - Australia
Abstract:
Modeling volcanic phenomena is complicated by free surfaces that often support large rheological gradients. Analytical solutions and analogue models provide explanations for fundamental characteristics of lava flows, but more sophisticated models are needed, incorporating improved physics and rheology to capture realistic events. To advance our understanding of the flow dynamics of highly viscous lava in Peléan lava dome formation, axisymmetric Finite Element Method (FEM) models of generic endogenous dome growth have been developed. We use a novel technique, the level-set method, which tracks a moving interface while leaving the mesh unaltered. The model equations are formulated in an Eulerian framework. In this paper we test the quality of this technique in our numerical scheme by considering existing analytical and experimental models of lava dome growth that assume a constant Newtonian viscosity. We then compare our model against analytical solutions for real lava domes extruded on Soufrière, St. Vincent, W.I. in 1979 and Mount St. Helens, USA in October 1980 using an effective viscosity. The level-set method is found to be computationally light and robust enough to model the free surface of a growing lava dome. Moreover, modeling the extruded lava with a constant pressure head naturally produces a drop in extrusion rate with increasing dome height, which explains lava dome growth observables more appropriately than a fixed extrusion rate. From the modeling point of view, the level-set method will ultimately provide an opportunity to capture more of the physics while benefiting from the numerical robustness of regular grids.
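As a rough illustration of the level-set idea described above (not the paper's FEM implementation), the sketch below advects a signed-distance-like function on a fixed Eulerian grid with a prescribed velocity field; the zero contour plays the role of the free surface, and the grid, velocity and time step are illustrative assumptions.

```python
# Minimal level-set sketch: the interface is the zero contour of phi, which is
# advected on a fixed grid while the mesh itself stays unchanged.
import numpy as np

def advect_level_set(phi, u, v, dx, dt, steps):
    """First-order upwind advection of the level-set field phi."""
    for _ in range(steps):
        dphix = np.where(u > 0,
                         (phi - np.roll(phi, 1, axis=0)) / dx,
                         (np.roll(phi, -1, axis=0) - phi) / dx)
        dphiy = np.where(v > 0,
                         (phi - np.roll(phi, 1, axis=1)) / dx,
                         (np.roll(phi, -1, axis=1) - phi) / dx)
        phi = phi - dt * (u * dphix + v * dphiy)
    return phi

# Example: a circular "dome" interface rising in a uniform vertical flow.
n = 64
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
phi0 = np.sqrt((x - 0.5) ** 2 + (y - 0.2) ** 2) - 0.15   # zero contour = interface
u = np.zeros((n, n))
v = np.full((n, n), 0.1)                                  # uniform upward velocity
phi1 = advect_level_set(phi0, u, v, dx=1.0 / (n - 1), dt=0.005, steps=40)
print("interface cells before/after:", np.sum(np.abs(phi0) < 0.02), np.sum(np.abs(phi1) < 0.02))
```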
Abstract:
A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but its sum or product with a scrambling random variable of known distribution is observed. The performance of two likelihood-based estimators is investigated: a Bayesian estimator obtained through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators are compared with an estimator suggested by Singh, Joarder & King (1996). Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and that its relative performance improves as the responses become more scrambled.
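The sketch below illustrates multiplicative response scrambling in a toy linear regression; because the scrambler is assumed to have known mean one, ordinary least squares on the scrambled responses remains unbiased, just noisier. This is a moment-based illustration only, not the Bayesian MCMC or maximum-likelihood estimators studied in the paper, and the gamma scrambling distribution and coefficients are assumptions.

```python
# Toy multiplicative scrambling (Eichhorn & Hayre style): only z = y * s is reported.
import numpy as np

rng = np.random.default_rng(0)
n, beta = 500, np.array([2.0, -1.5])

X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ beta + rng.normal(scale=0.5, size=n)        # true (unobserved) responses

s = rng.gamma(shape=20.0, scale=1 / 20.0, size=n)   # scrambler with known mean 1
z = y * s                                           # only the scrambled product is observed

beta_true_y = np.linalg.lstsq(X, y, rcond=None)[0]      # infeasible benchmark
beta_scrambled = np.linalg.lstsq(X, z, rcond=None)[0]   # unbiased since E[s] = 1, but noisier

print("OLS on true y:      ", beta_true_y)
print("OLS on scrambled z: ", beta_scrambled)
```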
Abstract:
The paper presents a theory for modeling flow in anisotropic, viscous rock. This theory was originally developed for the simulation of large deformation processes, including the folding and kinking of multi-layered visco-elastic rock (Muhlhaus et al. [1,2]). The orientation of slip planes in the context of crystallographic slip is determined by the normal vector - the director - of these surfaces. The model is applied to simulate anisotropic mantle convection. We compare the evolution of flow patterns, Nusselt number and director orientations for isotropic and anisotropic rheologies. In the simulations we utilize two different finite element methodologies: the Lagrangian Integration Point method of Moresi et al. [8] and an Eulerian formulation, which we implemented in the finite element based PDE solver Fastflo (www.cmis.csiro.au/Fastflo/). The reason for utilizing two different finite element codes was, firstly, to study the influence of an anisotropic power-law rheology, which is currently not implemented in the Lagrangian Integration Point scheme [8], and secondly, to study the numerical performance of the Eulerian (Fastflo) and Lagrangian integration schemes [8]. It turned out that, whereas in the Lagrangian method the Nusselt number versus time plot reached only a quasi-steady state in which the Nusselt number oscillates around a steady-state value, the Eulerian scheme reaches exact steady states and produces a high degree of alignment (director orientation locally orthogonal to the velocity vector almost everywhere in the computational domain). In the simulations, emergent anisotropy was strongest, in terms of modulus contrast, in the up- and down-welling plumes. Mechanisms for anisotropic material behavior in the mantle dynamics context are discussed by Christensen [3]. The dominant mineral phases in the mantle generally do not exhibit strong elastic anisotropy, but they may still be oriented by the convective flow. Thus viscous anisotropy (the main focus of this paper) may or may not correlate with elastic or seismic anisotropy.
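For context, the sketch below evaluates a commonly used transversely isotropic viscous constitutive law in which a director n marks the weak-plane normal, so that shear on the weak plane is governed by a reduced shear viscosity; the exact tensor form and parameters used in the paper may differ, and the values here are illustrative.

```python
# Transversely isotropic viscous stress sketch: tau = 2*eta*D - 2*(eta - eta_s)*Lambda(D),
# where Lambda projects out the shear components acting on the plane with normal n.
import numpy as np

def anisotropic_stress(D, n, eta, eta_s):
    """Deviatoric stress for a director-based transversely isotropic viscous fluid."""
    n = n / np.linalg.norm(n)
    N = np.outer(n, n)
    LamD = N @ D + D @ N - 2.0 * (n @ D @ n) * N   # Lambda_ijkl D_kl
    return 2.0 * eta * D - 2.0 * (eta - eta_s) * LamD

# Pure shear on the weak plane (normal along z): the response is set by eta_s.
D = np.zeros((3, 3)); D[0, 2] = D[2, 0] = 1.0
tau = anisotropic_stress(D, n=np.array([0.0, 0.0, 1.0]), eta=1.0, eta_s=0.1)
print(tau[0, 2])   # -> 0.2 = 2 * eta_s * D_xz
```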
Abstract:
For two-layered long wave propagation, linearized governing equations, which were derived earlier from the Euler equations of mass and momentum assuming negligible friction and interfacial mixing, are solved analytically using the Fourier transform. For the solution, the variation of the upper layer water level is assumed to be sinusoidal with known amplitude, and the variation of the interface level is solved for. As the governing equations are too complex to solve analytically, the density of the upper layer fluid is assumed to be very close to the density of the lower layer fluid in order to simplify the lower layer equation. A numerical model is developed using the staggered leap-frog scheme for computation of water level and discharge in one-dimensional propagation, with known amplitude for the variation of the upper layer water level and with the interface level to be solved for. For the numerical model, the water levels (upper layer and interface) at both boundaries are assumed to be known from the analytical solution. Results of the numerical model are verified by comparison with the analytical solutions for different time periods. Good agreement between the analytical solution and the numerical model is found for the stated boundary condition. The reliability of the developed numerical model is examined by applying it for different values of α (the ratio of the density of the fluid in the upper layer to that in the lower layer) and β (the ratio of the water depth in the lower layer to that in the upper layer). It is found that, as α increases, the amplification of the interface also increases for the same upper layer amplitude. Likewise, for a constant lower layer depth, as β increases, the amplification of the interface also increases for the same upper layer amplitude.
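A minimal sketch of the staggered leap-frog idea for a single-layer linear long wave (not the two-layer system solved in the paper) is given below; the depth, grid spacing and sinusoidal boundary forcing are illustrative assumptions.

```python
# Staggered leap-frog sketch for linear 1-D long waves: water level at cell centres,
# discharge at cell faces, alternating momentum and continuity updates.
import numpy as np

g, h = 9.81, 10.0                  # gravity, still-water depth
nx, dx, dt, nt = 200, 100.0, 1.0, 600
eta = np.zeros(nx)                 # water level at cell centres
q = np.zeros(nx + 1)               # discharge per unit width at cell faces

for n in range(1, nt + 1):
    q[1:-1] -= g * h * dt / dx * (eta[1:] - eta[:-1])    # momentum update on faces
    eta[:] -= dt / dx * (q[1:] - q[:-1])                 # continuity update on centres
    eta[0] = 0.5 * np.sin(2 * np.pi * n * dt / 120.0)    # prescribed sinusoidal boundary level

print("max water level in domain:", eta.max())
```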
Abstract:
In most magnetic resonance imaging (MRI) systems, pulsed magnetic gradient fields induce eddy currents in the conducting structures of the superconducting magnet. The eddy currents induced in structures within the cryostat are particularly problematic as they are characterized by long time constants by virtue of the low resistivity of the conductors. This paper presents a three-dimensional (3-D) finite-difference time-domain (FDTD) scheme in cylindrical coordinates for eddy-current calculation in conductors. This model is intended to be part of a complete FDTD model of an MRI system including all RF and low-frequency field generating units and electrical models of the patient. The singularity apparent in the governing equations is removed by using a series expansion method and the conductor-air boundary condition is handled using a variant of the surface impedance concept. The numerical difficulty due to the asymmetry of Maxwell equations for low-frequency eddy-current problems is circumvented by taking advantage of the known penetration behavior of the eddy-current fields. A perfectly matched layer absorbing boundary condition in 3-D cylindrical coordinates is also incorporated. The numerical method has been verified against analytical solutions for simple cases. Finally, the algorithm is illustrated by modeling a pulsed field gradient coil system within an MRI magnet system. The results demonstrate that the proposed FDTD scheme can be used to calculate large-scale eddy-current problems in materials with high conductivity at low frequencies.
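To give a flavour of low-frequency eddy-current penetration, the sketch below integrates the 1-D magnetic diffusion equation into a conducting slab with an explicit finite-difference time-domain update and reports the analytic skin depth for comparison; the paper's 3-D cylindrical FDTD scheme, series-expansion singularity treatment, surface impedance and PML are not reproduced, and the material and drive parameters are assumptions.

```python
# 1-D magnetic diffusion into a conductor: dB/dt = (1/(mu0*sigma)) d2B/dx2,
# driven by a sinusoidal surface field; illustrates the skin-depth decay that
# governs low-frequency eddy currents.
import numpy as np

mu0, sigma = 4e-7 * np.pi, 5.8e7          # permeability, copper-like conductivity
f = 1.0e3                                  # 1 kHz drive
dx, nx = 2.0e-4, 200                       # 0.2 mm cells, 4 cm slab
alpha = 1.0 / (mu0 * sigma)                # magnetic diffusivity
dt = 0.4 * dx**2 / (2 * alpha)             # explicit stability limit
B = np.zeros(nx)

t, T = 0.0, 5.0 / f
while t < T:
    B[0] = np.sin(2 * np.pi * f * t)                               # surface field
    B[1:-1] += alpha * dt / dx**2 * (B[2:] - 2 * B[1:-1] + B[:-2]) # diffusion update
    t += dt

delta = np.sqrt(2.0 / (mu0 * sigma * 2 * np.pi * f))               # analytic skin depth
i = int(round(delta / dx))
print("analytic skin depth [mm]:", 1e3 * delta)
print("snapshot field at surface vs one skin depth in:", B[0], B[i])
```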
Abstract:
A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization, in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion and better enforcement of regularization constraints than traditional Tikhonov regularization methodologies. Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration. (c) 2005 Elsevier B.V. All rights reserved.
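The sketch below shows the basic Tikhonov-regularised inversion that the methodology builds on: a smoothness constraint with a fixed weight stabilises an underdetermined least-squares problem. The operator, weight and synthetic data are illustrative; the paper's contribution of estimating the relative regularisation weights during the inversion itself is not reproduced here.

```python
# Tikhonov regularisation sketch: minimise ||Ax - b||^2 + lam^2 ||L x||^2
# for an underdetermined problem, using a first-difference smoothness operator L.
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 50                                    # fewer observations than parameters
A = rng.normal(size=(m, n))                      # sensitivity (Jacobian) matrix
x_true = np.sin(np.linspace(0, 3 * np.pi, n))    # smooth "true" parameter field
b = A @ x_true + rng.normal(scale=0.05, size=m)

L = np.diff(np.eye(n), axis=0)                   # first-difference regularisation operator
lam = 1.0                                        # fixed regularisation weight

# Solve via the stacked least-squares system [A; lam*L] x = [b; 0].
x_hat = np.linalg.lstsq(np.vstack([A, lam * L]),
                        np.concatenate([b, np.zeros(L.shape[0])]),
                        rcond=None)[0]
print("rms error of regularised estimate:", np.sqrt(np.mean((x_hat - x_true) ** 2)))
```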
Abstract:
The particle-based lattice solid model developed to study the physics of rocks and the nonlinear dynamics of earthquakes is refined by incorporating intrinsic friction between particles. The model provides a means for studying the causes of seismic wave attenuation, as well as frictional heat generation, fault zone evolution, and localisation phenomena. A modified velocity-Verlet scheme that allows friction to be precisely modelled is developed. This is a difficult computational problem given that a discontinuity must be accurately simulated by the numerical approach (i.e., the transition from static to dynamic frictional behaviour). This is achieved using a half time step integration scheme. At each half time step, a nonlinear system is solved to compute the static frictional forces and states of touching particle pairs. Improved efficiency is achieved by adaptively adjusting the time step increment, depending on the particle velocities in the system. The total energy is calculated and verified to remain constant to a high precision during simulations. Numerical experiments show that the model can be applied to the study of earthquake dynamics, the stick-slip instability, heat generation, and fault zone evolution. Such experiments may lead to a conclusive resolution of the heat flow paradox and improved understanding of earthquake precursory phenomena and dynamics. (C) 1999 Academic Press.
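For reference, a plain velocity-Verlet update is sketched below for a single harmonic oscillator, with an energy check in the spirit of the paper's conservation test; the half-time-step solve for static frictional forces between touching particle pairs is not reproduced, and the force law and parameters are illustrative.

```python
# Velocity-Verlet sketch: position update, force re-evaluation, velocity update with
# the average of old and new accelerations.
import numpy as np

def velocity_verlet(x, v, force, m, dt, steps):
    a = force(x) / m
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt**2      # position update
        a_new = force(x) / m                  # force at the new position
        v = v + 0.5 * (a + a_new) * dt        # velocity update with averaged acceleration
        a = a_new
    return x, v

k, m = 4.0, 1.0
x, v = velocity_verlet(x=1.0, v=0.0, force=lambda x: -k * x, m=m, dt=0.01, steps=1000)
# Total energy should stay constant to high precision, as verified in the paper's runs.
print("energy drift:", 0.5 * m * v**2 + 0.5 * k * x**2 - 0.5 * k * 1.0**2)
```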
Abstract:
In this paper we present an algorithm that combines a low-level morphological operation with a model-based Global Circular Shortest Path scheme to explore segmentation of the Right Ventricle. Traditional morphological operations are employed to obtain the region of interest and adjust it to generate a mask. The image cropped by the mask is then partitioned into a few overlapping regions. The Global Circular Shortest Path algorithm is then applied to extract the contour from each partition. The final step is to re-assemble the partitions to create the whole contour. The technique is deemed quite reliable and robust, as illustrated by the very good agreement between the extracted contour and the expert's manual drawing.
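A minimal sketch of a circular shortest path in polar coordinates is given below: one radius is chosen per angle so that the summed cost is minimal, the radius changes by at most one sample between neighbouring angles, and the path closes on itself (enforced here by a brute-force loop over starting radii). The synthetic cost image and the closure strategy are illustrative and simpler than the paper's Global Circular Shortest Path scheme.

```python
# Circular shortest path by dynamic programming over angles, repeated for each
# possible starting radius to enforce a closed contour.
import numpy as np

def circular_shortest_path(cost):
    n_ang, n_rad = cost.shape
    best_total, best_path = np.inf, None
    for r0 in range(n_rad):                       # fix the start radius to enforce closure
        acc = np.full(n_rad, np.inf); acc[r0] = cost[0, r0]
        back = np.zeros((n_ang, n_rad), dtype=int)
        for a in range(1, n_ang):
            new = np.full(n_rad, np.inf)
            for r in range(n_rad):
                for dr in (-1, 0, 1):             # smoothness constraint
                    rp = r + dr
                    if 0 <= rp < n_rad and acc[rp] + cost[a, r] < new[r]:
                        new[r], back[a, r] = acc[rp] + cost[a, r], rp
            acc = new
        if acc[r0] < best_total:                  # path returns to its starting radius
            best_total = acc[r0]
            path = [r0]
            for a in range(n_ang - 1, 0, -1):
                path.append(back[a, path[-1]])
            best_path = path[::-1]
    return best_total, best_path

# Synthetic cost: a low-cost ring around radius index 10.
ang, rad = np.meshgrid(np.arange(90), np.arange(25), indexing="ij")
cost = (rad - 10) ** 2 * 0.1 + 1.0
print(circular_shortest_path(cost)[1][:10])       # contour hugs radius 10
```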
Abstract:
The University of Queensland, Australia has developed Fez, a world-leading user-interface and management system for Fedora-based institutional repositories, which bridges the gap between a repository and its users. Christiaan Kortekaas, Andrew Bennett and Keith Webster will review this open source software, which gives institutions the power to create a comprehensive repository solution without the hassle.
Abstract:
We investigate here a modification of the discrete random pore model [Bhatia SK, Vartak BJ, Carbon 1996;34:1383], by including an additional rate constant which takes into account the different reactivity of the initial pore surface having attached functional groups and hydrogens, relative to the subsequently exposed surface. It is observed that the relative initial reactivity has a significant effect on the conversion and structural evolution, underscoring the importance of initial surface chemistry. The model is tested against experimental data on chemically controlled char oxidation and steam gasification at various temperatures. It is seen that the variations of the reaction rate and surface area with conversion are better represented by the present approach than earlier random pore models. The results clearly indicate the improvement of model predictions in the low conversion region, where the effect of the initially attached functional groups and hydrogens is more significant, particularly for char oxidation. It is also seen that, for the data examined, the initial surface chemistry is less important for steam gasification as compared to the oxidation reaction. Further development of the approach must also incorporate the dynamics of surface complexation, which is not considered here.
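For orientation, the sketch below integrates the classical random pore model rate form, dX/dt = k (1 - X) sqrt(1 - psi ln(1 - X)); the paper's modification, which adds a second rate constant for the initial functional-group-covered surface, is not reproduced, and the parameter values are illustrative.

```python
# Classical random pore model: simple explicit time integration of the conversion X.
import numpy as np

def random_pore_conversion(k, psi, dt=1e-3, t_end=5.0):
    t_hist, X_hist, X, t = [0.0], [0.0], 0.0, 0.0
    while t < t_end and X < 0.999:
        rate = k * (1.0 - X) * np.sqrt(1.0 - psi * np.log(1.0 - X))
        X, t = X + rate * dt, t + dt
        t_hist.append(t); X_hist.append(X)
    return np.array(t_hist), np.array(X_hist)

t, X = random_pore_conversion(k=1.0, psi=4.0)     # illustrative rate constant and psi
print("conversion at t = 1:", np.interp(1.0, t, X))
```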
Abstract:
The classical model of surface layering followed by capillary condensation during adsorption in mesopores is modified here by consideration of the adsorbate-solid interaction potential. The new theory accurately predicts the capillary coexistence curve as well as pore criticality, matching that predicted by density functional theory. The model also satisfactorily predicts the isotherm for nitrogen adsorption at 77.4 K on MCM-41 material of various pore sizes, synthesized and characterized in our laboratory, including the multilayer region, using only data on the variation of condensation pressure with pore diameter. The results indicate a minimum mesopore diameter of 14.1 Å for the surface layering model to hold, below which micropore filling must occur, and a minimum pore diameter of 34.2 Å for mechanical stability of the hemispherical meniscus during desorption. For pores between these two sizes, reversible condensation is predicted to occur, in accord with the experimental data for nitrogen adsorption on MCM-41 at 77.4 K.
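For comparison, the sketch below evaluates the classical Kelvin relation, ln(P/P0) = -2*gamma*Vm/(r_k*R*T), for nitrogen at 77.4 K; the adsorbate-solid interaction potential that distinguishes the present theory is not included, and the surface tension and molar volume are standard literature values rather than quantities taken from the paper.

```python
# Classical Kelvin equation for the relative pressure of capillary condensation,
# evaluated for nitrogen at 77.4 K over a few Kelvin radii.
import numpy as np

gamma = 8.85e-3        # N/m, liquid nitrogen surface tension at 77.4 K (literature value)
Vm = 34.7e-6           # m^3/mol, liquid nitrogen molar volume (literature value)
R, T = 8.314, 77.4

def kelvin_relative_pressure(r_kelvin_nm):
    """P/P0 at which a hemispherical meniscus of radius r_k is in equilibrium."""
    r = r_kelvin_nm * 1e-9
    return np.exp(-2.0 * gamma * Vm / (r * R * T))

for r_nm in (1.0, 1.7, 3.0, 5.0):
    print(f"r_k = {r_nm:.1f} nm -> P/P0 = {kelvin_relative_pressure(r_nm):.3f}")
```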
Abstract:
The detection of seizure in the newborn is a critical aspect of neurological research. Current automatic detection techniques are difficult to assess due to the problems associated with acquiring and labelling newborn electroencephalogram (EEG) data. A realistic model for newborn EEG would allow confident development, assessment and comparison of these detection techniques. This paper presents a model for newborn EEG that accounts for its self-similar and non-stationary nature. The model consists of background and seizure sub-models. The newborn EEG background model is based on the short-time power spectrum with a time-varying power law. The relationship between the fractal dimension and the power law of a power spectrum is utilized for accurate estimation of the short-time power law exponent. The newborn EEG seizure model is based on a well-known time-frequency signal model. This model addresses all significant time-frequency characteristics of newborn EEG seizure, which include multiple components or harmonics, piecewise-linear instantaneous frequency laws, and harmonic amplitude modulation. Estimates of the parameters of both models are shown to be random and are modelled using data from a total of 500 background epochs and 204 seizure epochs. The newborn EEG background and seizure models are validated against real newborn EEG data using the correlation coefficient. The results show that the output of the proposed models has a higher correlation with real newborn EEG than currently accepted models (a 10% and 38% improvement for background and seizure models, respectively).
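The sketch below generates a power-law (1/f^beta) background signal by spectral shaping of white noise, the basic ingredient underlying such a background sub-model; the paper's time-varying exponent, fractal-dimension-based estimation and seizure sub-model are not reproduced, and the sampling rate and exponent are illustrative.

```python
# Power-law background sketch: shape the spectrum of white noise so that
# power ~ f^(-beta), i.e. amplitude ~ f^(-beta/2).
import numpy as np

def power_law_noise(n, beta, fs=256.0, seed=0):
    rng = np.random.default_rng(seed)
    white = rng.normal(size=n)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    shaping = np.ones_like(f)
    shaping[1:] = f[1:] ** (-beta / 2.0)
    x = np.fft.irfft(np.fft.rfft(white) * shaping, n=n)
    return x / np.std(x)

epoch = power_law_noise(n=2048, beta=2.2)     # one simulated background epoch
print("epoch length (s):", 2048 / 256.0, "std:", epoch.std())
```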
Abstract:
Discrete element method (DEM) modeling is used in parallel with a model for coalescence of deformable surface wet granules. This produces a method capable of predicting both collision rates and coalescence efficiencies for use in derivation of an overall coalescence kernel. These coalescence kernels can then be used in computationally efficient meso-scale models such as population balance equation (PBE) models. A soft-sphere DEM model using periodic boundary conditions and a unique boxing scheme was utilized to simulate particle flow inside a high-shear mixer. Analysis of the simulation results provided collision frequency, aggregation frequency, kinetic energy, coalescence efficiency and compaction rates for the granulation process. This information can be used to bridge the gap in multi-scale modeling of granulation processes between the micro-scale DEM/coalescence modeling approach and a meso-scale PBE modeling approach.
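As a small illustration of the soft-sphere DEM ingredient, the sketch below evaluates a linear spring-dashpot normal contact force between two overlapping particles; the periodic boundaries, boxing scheme and coalescence model of the paper are not reproduced, and the stiffness, damping and particle properties are assumptions.

```python
# Soft-sphere DEM sketch: linear spring-dashpot normal contact force between two spheres.
import numpy as np

def normal_contact_force(x_i, x_j, v_i, v_j, r_i, r_j, k_n=1e4, c_n=5.0):
    """Force on particle i from particle j; zero if the particles do not overlap."""
    d = x_j - x_i
    dist = np.linalg.norm(d)
    overlap = (r_i + r_j) - dist
    if overlap <= 0.0:
        return np.zeros(3)
    n = d / dist                                   # unit normal from i to j
    v_rel_n = np.dot(v_i - v_j, n)                 # normal approach speed
    return -(k_n * overlap + c_n * v_rel_n) * n    # repulsive spring + viscous damping

f = normal_contact_force(np.zeros(3), np.array([0.0018, 0.0, 0.0]),
                         np.array([0.1, 0.0, 0.0]), np.zeros(3),
                         r_i=0.001, r_j=0.001)
print("contact force on particle i:", f)
```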
Abstract:
View of model for competition entry.
Abstract:
View of model for competition entry.