23 results for Minimization Problem, Lattice Model

at University of Queensland eSpace - Australia


Relevance:

100.00%

Publisher:

Abstract:

A calibration methodology based on an efficient and stable mathematical regularization scheme is described. This scheme is a variant of so-called Tikhonov regularization in which the parameter estimation process is formulated as a constrained minimization problem. Use of the methodology eliminates the need for a modeler to formulate a parsimonious inverse problem in which a handful of parameters are designated for estimation prior to initiating the calibration process. Instead, the level of parameter parsimony required to achieve a stable solution to the inverse problem is determined by the inversion algorithm itself. Where parameters, or combinations of parameters, cannot be uniquely estimated, they are provided with values, or assigned relationships with other parameters, that are decreed to be realistic by the modeler. Conversely, where the information content of a calibration dataset is sufficient to allow estimates to be made of the values of many parameters, the making of such estimates is not precluded by preemptive parsimonizing ahead of the calibration process. While Tikhonov schemes are very attractive and hence widely used, problems with numerical stability can sometimes arise because the strength with which regularization constraints are applied throughout the regularized inversion process cannot be guaranteed to exactly complement inadequacies in the information content of a given calibration dataset. A new technique overcomes this problem by allowing relative regularization weights to be estimated as parameters through the calibration process itself. The technique is applied to the simultaneous calibration of five subwatershed models, and it is demonstrated that the new scheme results in a more efficient inversion, and better enforcement of regularization constraints, than traditional Tikhonov regularization methodologies. Moreover, it is argued that a joint calibration exercise of this type results in a more meaningful set of parameters than can be achieved by individual subwatershed model calibration. (c) 2005 Elsevier B.V. All rights reserved.
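The regularized estimation step described above can be illustrated with a minimal sketch (in Python, not the authors' implementation): the measurement equations and the preferred-value regularization constraints are stacked into a single least-squares system, with a weight controlling how strongly the constraints are enforced. The Jacobian J, observations d, regularization operator R and preferred values r below are hypothetical stand-ins.

```python
import numpy as np

def tikhonov_step(J, d, R, r, weight):
    """One Tikhonov-regularized least-squares step.

    Minimizes ||J p - d||^2 + weight^2 * ||R p - r||^2 by stacking the
    measurement and regularization equations into one system.
    J: model Jacobian, d: observations, R: regularization operator,
    r: preferred parameter values/relationships, weight: regularization weight.
    """
    A = np.vstack([J, weight * R])
    b = np.concatenate([d, weight * r])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Toy usage with made-up numbers (illustrative only):
J = np.random.randn(20, 8)          # sensitivities of 20 observations to 8 parameters
d = J @ np.ones(8) + 0.01 * np.random.randn(20)
R = np.eye(8)                        # "preferred value" regularization
r = np.zeros(8)
print(tikhonov_step(J, d, R, r, weight=1.0))
```

Raising the weight pushes poorly informed parameters toward their preferred values; lowering it lets the data dominate, which is the trade-off the adaptive weighting scheme described above automates.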

Relevance:

100.00%

Publisher:

Abstract:

Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights into the loss of system detail incurred through the calibration process to be gained. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization. (C) 2005 Elsevier Ltd. All rights reserved.
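The statement that each estimated parameter is a weighted average of the true field can be made concrete through the model resolution matrix of a regularized linear inversion. The sketch below assumes simple Tikhonov regularization with a single weight lam; the Jacobian is a hypothetical stand-in, not the synthetic pilot-point example of the paper.

```python
import numpy as np

def resolution_matrix(J, lam):
    """Model resolution matrix of a Tikhonov-regularized linear inversion.

    For p_est = (J^T J + lam^2 I)^{-1} J^T d with noise-free d = J p_true,
    p_est = Rres @ p_true, so row i of Rres holds the averaging weights that
    map the true parameter field into the i-th estimated parameter.
    """
    n = J.shape[1]
    Rres = np.linalg.solve(J.T @ J + lam**2 * np.eye(n), J.T @ J)
    return Rres

J = np.random.randn(15, 30)   # 15 head observations, 30 pilot-point parameters (toy numbers)
Rres = resolution_matrix(J, lam=0.5)
print(Rres[0])                # averaging weights contributing to the first estimated parameter
```

Rows of the resolution matrix that are broad and far from a unit spike correspond to the loss of detail and resolution discussed above.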

Relevance:

100.00%

Publisher:

Abstract:

Quantitatively predicting mass transport rates for chemical mixtures in porous materials is important in applications of materials such as adsorbents, membranes, and catalysts. Because directly assessing mixture transport experimentally is challenging, theoretical models that can predict mixture diffusion coefficients using only single-component information would have many uses. One such model was proposed by Skoulidas, Sholl, and Krishna (Langmuir, 2003, 19, 7977), and applications of this model to a variety of chemical mixtures in nanoporous materials have yielded promising results. In this paper, the accuracy of this model for predicting mixture diffusion coefficients in materials that exhibit a heterogeneous distribution of local binding energies is examined. To examine this issue, single-component and binary mixture diffusion coefficients are computed using kinetic Monte Carlo for a two-dimensional lattice model over a wide range of lattice occupancies and compositions. The approach suggested by Skoulidas, Sholl, and Krishna is found to be accurate in situations where the spatial distribution of binding site energies is relatively homogeneous, but is considerably less accurate for strongly heterogeneous energy distributions.
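A minimal sketch of the kind of lattice Monte Carlo move such studies rely on: single-component, site-blocking hops on a 2-D periodic lattice with a heterogeneous distribution of binding-site energies. A Metropolis-style acceptance rule is used here for brevity in place of a full kinetic Monte Carlo rate catalogue, and all parameters (lattice size, occupancy, energy distribution) are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 32
beta = 1.0                                   # 1/kT in reduced units (illustrative)
E = rng.normal(0.0, 0.5, size=(L, L))        # heterogeneous binding-site energies
occ = rng.random((L, L)) < 0.3               # ~30% lattice occupancy
directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def attempt_hop():
    """One hop attempt of a randomly chosen adsorbed particle to a neighbouring site."""
    xs, ys = np.nonzero(occ)
    i = rng.integers(len(xs))
    x, y = xs[i], ys[i]
    dx, dy = directions[rng.integers(4)]
    nx, ny = (x + dx) % L, (y + dy) % L      # periodic boundaries
    if occ[nx, ny]:
        return False                         # site blocking: hop rejected
    dE = E[nx, ny] - E[x, y]
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        occ[x, y], occ[nx, ny] = False, True
        return True
    return False

accepted = sum(attempt_hop() for _ in range(10000))
print("accepted hops:", accepted)
```

Tracking mean-square displacement of tagged particles over such trajectories is how self- and transport diffusivities are extracted in practice.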

Relevance:

100.00%

Publisher:

Abstract:

We use series expansion methods to calculate the dispersion relation of the one-magnon excitations for the spin-1/2 triangular-lattice nearest-neighbor Heisenberg antiferromagnet above a three-sublattice ordered ground state. Several striking features are observed compared to the classical (large-S) spin-wave spectra. Whereas at low energies the dispersion is only weakly renormalized by quantum fluctuations, significant anomalies are observed at high energies. In particular, we find rotonlike minima at special wave vectors and strong downward renormalization in large parts of the Brillouin zone, leading to very flat or dispersionless modes. We present a detailed comparison of our calculated excitation energies in the Brillouin zone with the spin-wave dispersion to order 1/S calculated recently by Starykh, Chubukov, and Abanov [Phys. Rev. B 74, 180403(R) (2006)]. We find many common features but also some quantitative and qualitative differences. We show that at temperatures as low as 0.1J the thermally excited rotons make a significant contribution to the entropy. Consequently, unlike for the square lattice model, a nonlinear sigma model description of the finite-temperature properties is only applicable at temperatures < 0.1J. Finally, we review recent NMR measurements on the organic compound kappa-(BEDT-TTF)2Cu2(CN)3. We argue that these are inconsistent with long-range order and a description of the low-energy excitations in terms of interacting magnons, and that therefore a Heisenberg model with only nearest-neighbor exchange does not offer an adequate description of this material.
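For orientation only (this expression is standard linear spin-wave theory, not a result of the paper), the classical large-S dispersion against which the series results are compared can be written, up to an overall prefactor set by the conventions for J and S, as

$$ \omega_{\mathbf{k}} \propto \sqrt{\left(1-\gamma_{\mathbf{k}}\right)\left(1+2\gamma_{\mathbf{k}}\right)}, \qquad \gamma_{\mathbf{k}} = \tfrac{1}{3}\Big[\cos k_x + 2\cos\tfrac{k_x}{2}\,\cos\tfrac{\sqrt{3}\,k_y}{2}\Big], $$

and the rotonlike minima and flat modes reported above appear as deviations of the series-expansion spectrum from this form at high energies.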

Relevance:

100.00%

Publisher:

Abstract:

A generic method for the estimation of parameters for Stochastic Ordinary Differential Equations (SODEs) is introduced and developed. This algorithm, called the GePERs method, uses a genetic optimisation algorithm to minimise a stochastic objective function based on the Kolmogorov-Smirnov (KS) statistic, which is formed from numerical simulations of the model. Some of the factors that improve the precision of the estimates are also examined. The method is used to estimate parameters of diffusion equations and jump-diffusion equations, and is applied to the problem of model selection for the Queensland electricity market. (C) 2003 Elsevier B.V. All rights reserved.
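A minimal sketch of the idea behind such a scheme (not the GePERs implementation itself): simulate the SODE for trial parameters, measure the distance between simulated and observed samples with a Kolmogorov-Smirnov statistic, and drive the parameters with an evolutionary optimiser. The Ornstein-Uhlenbeck process, scipy's differential_evolution, and all numerical values below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp
from scipy.optimize import differential_evolution

def simulate_ou(theta, mu, sigma, x0=0.0, n=500, dt=0.01, seed=2):
    """Euler-Maruyama simulation of dX = theta*(mu - X) dt + sigma dW (toy SODE)."""
    r = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = x[i-1] + theta * (mu - x[i-1]) * dt + sigma * np.sqrt(dt) * r.normal()
    return x

observed = simulate_ou(2.0, 1.0, 0.3, seed=3)       # stand-in for observed market data

def ks_objective(params):
    """Stochastic objective: KS distance between simulated and observed samples."""
    theta, mu, sigma = params
    simulated = simulate_ou(theta, mu, sigma)
    return ks_2samp(simulated, observed).statistic

result = differential_evolution(ks_objective,
                                bounds=[(0.1, 5.0), (-2.0, 2.0), (0.05, 1.0)],
                                seed=4)
print(result.x)
```

Because the objective itself is noisy, population-based optimisers of this kind are a natural fit; gradient-based methods would struggle with the simulation noise.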

Relevance:

100.00%

Publisher:

Abstract:

The role of mutualisms in contributing to species invasions is rarely considered, inhibiting effective risk analysis and management options. Potential ecological consequences of invasion of non-native pollinators include increased pollination and seed set of invasive plants, with subsequent impacts on population growth rates and rates of spread. We outline a quantitative approach for evaluating the impact of a proposed introduction of an invasive pollinator on existing weed population dynamics and demonstrate the use of this approach on a relatively data-rich case study: the impacts on Cytisus scoparius (Scotch broom) from the proposed introduction of Bombus terrestris. Three models have been used to assess population growth (matrix model), spread speed (integrodifference equation), and equilibrium occupancy (lattice model) for C. scoparius. We use available demographic data for an Australian population to parameterize two of these models. Increased seed set due to more efficient pollination resulted in a higher population growth rate in the density-independent matrix model, whereas simulations of enhanced pollination scenarios had a negligible effect on equilibrium weed occupancy in the lattice model. This is attributed to strong microsite limitation of recruitment in invasive C. scoparius populations observed in Australia and incorporated in the lattice model. A lack of information regarding secondary ant dispersal of C. scoparius prevents us from parameterizing the integrodifference equation model for Australia, but studies of invasive populations in California suggest that spread speed will also increase with higher seed set. For microsite-limited C. scoparius populations, increased seed set has minimal effects on equilibrium site occupancy. However, for density-independent rapidly invading populations, increased seed set is likely to lead to higher growth rates and spread speeds. The impacts of introduced pollinators on native flora and fauna and the potential for promoting range expansion in pollinator-limited 'sleeper weeds' also remain substantial risks.
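As an illustration of the density-independent matrix-model component of such an analysis (with a made-up three-stage matrix, not the C. scoparius parameters of the study), the asymptotic growth rate is the dominant eigenvalue of the stage-transition matrix, and enhanced pollination enters as an increase in the fecundity entry:

```python
import numpy as np

def growth_rate(A):
    """Asymptotic population growth rate = dominant eigenvalue of a stage-structured matrix."""
    return np.max(np.abs(np.linalg.eigvals(A)))

# Illustrative 3-stage matrix (seed bank, juvenile, adult); entries are NOT the
# C. scoparius parameters used in the paper.
A = np.array([
    [0.2, 0.0, 50.0],   # seeds produced per adult enter the seed bank
    [0.05, 0.3, 0.0],   # germination / juvenile survival
    [0.0, 0.4, 0.8],    # maturation / adult survival
])

A_enhanced = A.copy()
A_enhanced[0, 2] *= 1.5   # e.g., 50% more seed set under better pollination
print(growth_rate(A), growth_rate(A_enhanced))
```

In a lattice model with microsite-limited recruitment, by contrast, extra seeds mostly land in occupied or unsuitable cells, which is why the same seed-set increase can leave equilibrium occupancy almost unchanged.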

Relevance:

100.00%

Publisher:

Abstract:

We use series expansions to study the excitation spectra of spin-1/2 antiferromagnets on anisotropic triangular lattices. For the isotropic triangular lattice model (TLM), the high-energy spectra show several anomalous features that differ strongly from linear spin-wave theory (LSWT). Even in the Neel phase, the deviations from LSWT increase sharply with frustration, leading to rotonlike minima at special wave vectors. We argue that these results can be interpreted naturally in a spinon language and provide an explanation for the previously observed anomalous finite-temperature properties of the TLM. In the coupled-chains limit, quantum renormalizations strongly enhance the one-dimensionality of the spectra, in agreement with experiments on Cs2CuCl4.

Relevance:

50.00%

Publisher:

Abstract:

The particle-based lattice solid model developed to study the physics of rocks and the nonlinear dynamics of earthquakes is refined by incorporating intrinsic friction between particles. The model provides a means for studying the causes of seismic wave attenuation, as well as frictional heat generation, fault zone evolution, and localisation phenomena. A modified velocity-Verlet scheme that allows friction to be precisely modelled is developed. This is a difficult computational problem given that a discontinuity must be accurately simulated by the numerical approach (i.e., the transition from static to dynamical frictional behaviour). This is achieved using a half time step integration scheme. At each half time step, a nonlinear system is solved to compute the static frictional forces and states of touching particle-pairs. Improved efficiency is achieved by adaptively adjusting the time step increment, depending on the particle velocities in the system. The total energy is calculated and verified to remain constant to a high precision during simulations. Numerical experiments show that the model can be applied to the study of earthquake dynamics, the stick-slip instability, heat generation, and fault zone evolution. Such experiments may lead to a conclusive resolution of the heat flow paradox and improved understanding of earthquake precursory phenomena and dynamics. (C) 1999 Academic Press.
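A highly simplified sketch of a velocity-Verlet step with a mid-step force evaluation, in the spirit of the half-time-step scheme described above; the contact and frictional force laws of the lattice solid model are replaced here by hypothetical user-supplied callables, and no static-friction solve is included.

```python
import numpy as np

def velocity_verlet_step(x, v, m, dt, force, friction_force):
    """One velocity-Verlet step with forces re-evaluated at the half step.

    Positions and half-step velocities are advanced first, forces (including
    the frictional contribution) are then evaluated again, and the velocity
    update is completed. `force` and `friction_force` are user-supplied
    callables, not the lattice solid model's actual contact laws.
    """
    a = (force(x) + friction_force(x, v)) / m
    v_half = v + 0.5 * dt * a                 # half-step velocity
    x_new = x + dt * v_half                   # full-step position
    a_new = (force(x_new) + friction_force(x_new, v_half)) / m
    v_new = v_half + 0.5 * dt * a_new         # complete the velocity update
    return x_new, v_new

# Toy usage: a single particle on a spring with viscous "friction"
x, v = np.array([1.0]), np.array([0.0])
for _ in range(1000):
    x, v = velocity_verlet_step(x, v, m=1.0, dt=0.01,
                                force=lambda x: -x,
                                friction_force=lambda x, v: -0.1 * v)
print(x, v)
```

In the actual scheme described above, the frictional term at the half step is not a simple callable but the solution of a nonlinear system for the static frictional forces and states of touching particle pairs.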

Relevance:

40.00%

Publisher:

Abstract:

Statistical tests of Load-Unload Response Ratio (LURR) signals are carried out in order to verify the statistical robustness of the previous studies using the Lattice Solid Model (MORA et al., 2002b). In each case 24 groups of samples with the same macroscopic parameters (tidal perturbation amplitude A, period T and tectonic loading rate k) but different particle arrangements are employed. Results of uni-axial compression experiments show that before the normalized time of catastrophic failure, the ensemble average LURR value rises significantly, in agreement with the observations of high LURR prior to large earthquakes. In shearing tests, two parameters are found to control the correlation between earthquake occurrence and tidal stress. One is A/(kT), which controls the phase shift between the peak seismicity rate and the peak amplitude of the perturbation stress; as this parameter increases, the phase shift decreases. The other, AT/k, controls the height of the probability density function (Pdf) of modeled seismicity: as this parameter increases, the Pdf becomes sharper and narrower, indicating strong triggering. Statistical studies of LURR signals in shearing tests also suggest that, except in strong triggering cases where LURR cannot be calculated due to poor data in unloading cycles, the larger events are more likely to occur in high-LURR periods than the smaller ones, supporting the LURR hypothesis.
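For reference, a common form of the LURR statistic (this generic definition is not necessarily the exact statistic used in the cited study) is the ratio of summed energy release, raised to a power m, during loading phases of the perturbing stress to that during unloading phases; m = 0.5 corresponds to Benioff strain:

```python
import numpy as np

def lurr(event_energies, is_loading, m=0.5):
    """Load-Unload Response Ratio for one time window (generic sketch)."""
    e = np.asarray(event_energies, dtype=float) ** m
    mask = np.asarray(is_loading, dtype=bool)
    loading = e[mask].sum()
    unloading = e[~mask].sum()
    return loading / unloading if unloading > 0 else np.inf

energies = np.array([1.0, 3.0, 0.5, 2.0, 8.0])       # illustrative event energies
phase = np.array([True, True, False, True, False])   # loading / unloading flags
print(lurr(energies, phase))
```

The failure of the statistic in strong-triggering cases mentioned above corresponds to the unloading sum approaching zero, where the ratio is undefined.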

Relevance:

40.00%

Publisher:

Abstract:

The Lattice Solid Model has been used successfully as a virtual laboratory to simulate fracturing of rocks, the dynamics of faults, earthquakes and gouge processes. However, results from those simulations show that in order to make the next step towards more realistic experiments it will be necessary to use models containing a significantly larger number of particles than current models. Thus, those simulations will require a greatly increased amount of computational resources. Whereas the computing power provided by single processors can be expected to increase according to Moore's law, i.e., to double every 18-24 months, parallel computers can provide significantly larger computing power today. In order to make this computing power available for the simulation of the microphysics of earthquakes, a parallel version of the Lattice Solid Model has been implemented. Benchmarks using large models with several million particles have shown that the parallel implementation of the Lattice Solid Model can achieve a high parallel efficiency of about 80% for large numbers of processors on different computer architectures.
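The quoted figure corresponds to the usual definition of parallel efficiency, E = (T_1 / T_p) / p, i.e., speedup divided by processor count; a one-line sketch with illustrative timings (not the paper's benchmark numbers):

```python
def parallel_efficiency(t_serial, t_parallel, n_procs):
    """Parallel efficiency E = speedup / processors = (T_1 / T_p) / p."""
    return (t_serial / t_parallel) / n_procs

# Illustrative numbers only (not the benchmark figures of the paper):
print(parallel_efficiency(t_serial=3600.0, t_parallel=56.25, n_procs=80))  # -> 0.8
```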

Relevance:

40.00%

Publisher:

Abstract:

In this article we study the effects of adsorbed phase compression, lattice structure, and pore size distribution on the analysis of adsorption in microporous activated carbon. The lattice gas approach of Ono-Kondo is modified to account for the above effects. Data of nitrogen adsorption at 77 K onto a number of activated carbon samples are analyzed to investigate the pore filling pressure versus pore width, the packing effect, and the compression of the adsorbed phase. It is found that the PSDs obtained from this analysis are comparable to those obtained by the DFT method. The discrete nature of the PSDs derived from the modified lattice gas theory is due to the inherent assumption of discrete layers of molecules. Nevertheless, it does provide interesting information on the evolution of micropores during the activation process.
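A generic sketch of how a discrete PSD is recovered from an isotherm once a single-pore kernel (lattice-gas, DFT, or otherwise) is available; this is a standard non-negative least-squares inversion and not necessarily the exact fitting procedure of the article. The kernel and "measured" isotherm below are synthetic stand-ins.

```python
import numpy as np
from scipy.optimize import nnls

def fit_psd(kernel, measured_isotherm):
    """Generic pore-size-distribution fit.

    kernel[i, j] = amount adsorbed at pressure i in a pore of width j
    (from any single-pore model). The measured isotherm is expressed as a
    non-negative combination of single-pore isotherms; the weights are the
    discrete PSD.
    """
    weights, residual = nnls(kernel, measured_isotherm)
    return weights, residual

# Toy kernel: 50 pressures x 10 pore widths, purely illustrative
rng = np.random.default_rng(5)
kernel = np.cumsum(rng.random((50, 10)), axis=0)       # monotone "isotherms"
true_psd = np.array([0, 0, 1.0, 0.5, 0, 0, 0.2, 0, 0, 0])
isotherm = kernel @ true_psd
psd, res = fit_psd(kernel, isotherm)
print(np.round(psd, 3), res)
```

The discreteness of the recovered PSD noted above reflects the layer-by-layer assumptions built into the lattice-gas kernel rather than the inversion step itself.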

Relevance:

40.00%

Publisher:

Abstract:

In this study, the 3-D Lattice Solid Model (LSMearth or LSM) was extended by introducing particle-scale rotation. In the new model, each 3-D particle has six degrees of freedom: three for translational motion and three for orientation. Six kinds of relative motion are permitted between two neighboring particles, and six interactions are transferred, i.e., a radial force, two shearing forces, a twisting torque and two bending torques. By using quaternion algebra, the relative rotation between two particles is decomposed into two sequence-independent rotations such that all interactions due to the relative motions between interacting rigid bodies can be uniquely determined. After incorporating this mechanism and introducing bond breaking under torsion and bending into the LSM, several tests on 2-D and 3-D rock failure under uni-axial compression are carried out. Compared with simulations without the single-particle rotational mechanism, the new simulation results match experimental results of rock fracture more closely and hence are encouraging. Since more parameters are introduced, an approach for choosing the new parameters is presented.
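A minimal sketch of the quaternion algebra that underlies such a decomposition (generic quaternion operations, not the LSM implementation): composing orientations, rotating a vector, and forming the relative orientation of one particle in the frame of another.

```python
import numpy as np

def q_mul(q1, q2):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def q_conj(q):
    """Conjugate (inverse for unit quaternions)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def q_rotate(q, v):
    """Rotate vector v by unit quaternion q: v' = q v q*."""
    qv = np.array([0.0, *v])
    return q_mul(q_mul(q, qv), q_conj(q))[1:]

def relative_orientation(q_a, q_b):
    """Orientation of particle b expressed in the frame of particle a: q_a* x q_b."""
    return q_mul(q_conj(q_a), q_b)

# 90-degree rotation about z applied to the x-axis (illustrative)
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
print(np.round(q_rotate(q, [1.0, 0.0, 0.0]), 6))   # -> approximately [0, 1, 0]
```

Working with quaternions rather than Euler angles avoids the sequence-dependence of finite rotations, which is the property the sequence-independent decomposition above relies on.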

Relevance:

40.00%

Publisher:

Abstract:

Despite the insight gained from 2-D particle models, and given that the dynamics of crustal faults occur in 3-D space, the question remains: how do 3-D fault gouge dynamics differ from those in 2-D? Traditionally, 2-D modeling has been preferred over 3-D simulations because of the computational cost of solving 3-D problems. However, modern high performance computing architectures, combined with a parallel implementation of the Lattice Solid Model (LSM), provide the opportunity to explore 3-D fault micro-mechanics and to advance understanding of the effective constitutive relations of fault gouge layers. In this paper, macroscopic friction values from 2-D and 3-D LSM simulations, performed on an SGI Altix 3700 super-cluster, are compared. Two rectangular elastic blocks of bonded particles, with a rough fault plane and separated by a region of randomly sized non-bonded gouge particles, are sheared in opposite directions by normally-loaded driving plates. The results demonstrate that the gouge particles in the 3-D models undergo significant out-of-plane motion during shear. The 3-D models also exhibit a higher mean macroscopic friction than the 2-D models for varying values of interparticle friction. 2-D LSM gouge models have previously been shown to exhibit accelerating energy release in simulated earthquake cycles, supporting the Critical Point hypothesis. The 3-D models are shown to also display accelerating energy release, and good fits of power-law time-to-failure functions to the cumulative energy release are obtained.
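A sketch of the kind of power-law time-to-failure fit mentioned in the last sentence, applied to synthetic accelerating-release data. The functional form E(t) = A - B (t_f - t)^m is the commonly used accelerating-moment-release form; the data and starting values below are illustrative, not simulation output.

```python
import numpy as np
from scipy.optimize import curve_fit

def time_to_failure(t, A, B, tf, m):
    """Power-law time-to-failure form often fitted to cumulative energy release:
    E(t) = A - B * (tf - t)**m, with tf the failure time and 0 < m < 1."""
    return A - B * np.clip(tf - t, 1e-9, None) ** m

# Synthetic accelerating-release data (illustrative, not simulation output)
rng = np.random.default_rng(6)
t = np.linspace(0.0, 0.95, 200)
energy = time_to_failure(t, A=10.0, B=9.0, tf=1.0, m=0.3) + 0.05 * rng.normal(size=t.size)

popt, _ = curve_fit(time_to_failure, t, energy, p0=[10.0, 9.0, 1.05, 0.5], maxfev=20000)
print("fitted A, B, tf, m:", np.round(popt, 3))
```

An exponent m well below 1 indicates the accelerating (super-linear) release that the Critical Point hypothesis predicts as failure is approached.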