238 results for Fixed smeared crack model
in University of Queensland eSpace - Australia
Abstract:
We investigate the internal dynamics of two cellular automaton models with heterogeneous strength fields and differing nearest-neighbour laws. One model is a crack-like automaton, transferring all stress from a rupture zone to the surroundings. The other is a partial stress drop automaton, transferring only a fraction of the stress within a rupture zone to the surroundings. To study the evolution of stress, the mean spectral density f(k_r) of a stress deficit field is examined prior to, and immediately following, ruptures in both models. Both models display a power-law relationship between f(k_r) and spatial wavenumber k_r of the form f(k_r) ~ k_r^(-beta). In the crack model, the evolution of stress deficit is consistent with cyclic approach to, and retreat from, a critical state in which large events occur. The approach to criticality is driven by tectonic loading. Short-range stress transfer in the model does not affect the approach to criticality of broad regions in the model. The evolution of stress deficit in the partial stress drop model is consistent with small fluctuations about a mean state of high stress, behaviour indicative of a self-organised critical system. Despite statistics similar to those of natural earthquakes, these simplified models lack a physical basis. Physically motivated models of earthquakes also display dynamical complexity similar to that of a critical point system. Studies of dynamical complexity in physical models of earthquakes may lead to advancement towards a physical theory for earthquakes.
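The power-law form f(k_r) ~ k_r^(-beta) can be estimated from a stress-deficit field by a log-log fit of its spectral density against wavenumber. A minimal sketch on a synthetic 1D field (the function name and the synthetic data are illustrative, not the paper's code):

```python
import numpy as np

def fit_spectral_exponent(field):
    """Estimate beta in f(k_r) ~ k_r^(-beta) from the power spectral
    density of a 1D stress-deficit field."""
    spec = np.abs(np.fft.rfft(field)) ** 2
    k = np.arange(1, len(spec) - 1)      # skip the k=0 mean and the Nyquist bin
    slope, _ = np.polyfit(np.log(k), np.log(spec[1:-1]), 1)
    return -slope                        # slope of the log-log fit is -beta

# synthetic field with a prescribed power-law spectrum, beta = 2
rng = np.random.default_rng(0)
n = 4096
k = np.arange(1, n // 2 + 1)
amps = k ** -1.0                         # amplitude ~ k^(-beta/2), power ~ k^(-beta)
phases = rng.uniform(0.0, 2.0 * np.pi, len(k))
field = np.fft.irfft(np.concatenate(([0], amps * np.exp(1j * phases))), n)
beta = fit_spectral_exponent(field)      # recovers beta close to 2
```

The fit excludes the k=0 (mean) bin, which carries no spectral-slope information, and the Nyquist bin, whose phase is discarded by the real transform.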
Abstract:
The evolution of event time and size statistics in two heterogeneous cellular automaton models of earthquake behavior is studied and compared to the evolution of these quantities during observed periods of accelerating seismic energy release prior to large earthquakes. The two automata have different nearest-neighbor laws, one of which produces self-organized critical (SOC) behavior (PSD model) and the other of which produces quasi-periodic large events (crack model). In the PSD model, periods of accelerating energy release before large events are rare. In the crack model, many large events are preceded by periods of accelerating energy release. When compared to randomized event catalogs, accelerating energy release before large events occurs more often than random in the crack model but less often than random in the PSD model; it is easier to tell the crack and PSD model results apart from each other than to tell either model apart from a random catalog. The evolution of event sizes during the accelerating energy release sequences in all models is compared to that of observed sequences. The accelerating energy release sequences in the crack model consist of an increase in the rate of events of all sizes, consistent with observations from a small number of natural cases, but inconsistent with a larger number of cases in which there is an increase in the rate of only moderate-sized events. On average, no increase in the rate of events of any size is seen before large events in the PSD model.
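Accelerating energy release before a large event is commonly quantified by fitting cumulative Benioff strain with a power-law time-to-failure curve and comparing its misfit to that of a straight line. A hedged sketch of that comparison (the fixed exponent 0.3 and the function name are illustrative assumptions, not taken from the study):

```python
import numpy as np

def curvature_ratio(times, energies, t_f):
    """Curvature parameter C: rms misfit of a power-law time-to-failure
    fit divided by that of a linear fit to cumulative Benioff strain.
    C < 1 indicates accelerating energy release (simple form with a
    fixed exponent m = 0.3, a common illustrative choice)."""
    benioff = np.cumsum(np.sqrt(energies))
    lin = np.polyval(np.polyfit(times, benioff, 1), times)
    # power law Sigma(t) = a + b*(t_f - t)^0.3 is linear in (t_f - t)^0.3
    xx = (t_f - times) ** 0.3
    pwr = np.polyval(np.polyfit(xx, benioff, 1), xx)
    rms = lambda r: np.sqrt(np.mean(r ** 2))
    return rms(benioff - pwr) / rms(benioff - lin)

# an ideal accelerating sequence should give C well below 1
times = np.linspace(0.0, 0.99, 100)
sigma = 10.0 - 10.0 * (1.0 - times) ** 0.3   # ideal cumulative Benioff strain
increments = np.diff(sigma, prepend=0.0)
C = curvature_ratio(times, increments ** 2, t_f=1.0)
```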
Abstract:
Fixed-point roundoff noise in digital implementation of linear systems arises due to overflow, quantization of coefficients and input signals, and arithmetical errors. In uniform white-noise models, the last two types of roundoff errors are regarded as uniformly distributed independent random vectors on cubes of suitable size. For input signal quantization errors, the heuristic model is justified by a quantization theorem, which cannot be directly applied to arithmetical errors due to the complicated input-dependence of errors. The complete uniform white-noise model is shown to be valid in the sense of weak convergence of probabilistic measures as the lattice step tends to zero if the matrices of realization of the system in the state space satisfy certain nonresonance conditions and the finite-dimensional distributions of the input signal are absolutely continuous.
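The uniform white-noise model can be checked empirically: for an input with absolutely continuous distribution and a small lattice step q, rounding errors behave like uniform variates on [-q/2, q/2], with mean 0 and variance q^2/12. A small illustration (not from the paper):

```python
import numpy as np

def quantization_error(x, q):
    """Roundoff error from rounding x to the nearest lattice point of step q."""
    return np.round(x / q) * q - x

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 100_000)   # absolutely continuous input signal
q = 1e-3                            # small lattice step
e = quantization_error(x, q)
# Under the white-noise model: e bounded by q/2, mean -> 0, variance -> q^2/12.
```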
Abstract:
The quantitative description of the quantum entanglement between a qubit and its environment is considered. Specifically, for the ground state of the spin-boson model, the entropy of entanglement of the spin is calculated as a function of α, the strength of the ohmic coupling to the environment, and ɛ, the level asymmetry. This is done by a numerical renormalization group treatment of the related anisotropic Kondo model. For ɛ=0, the entanglement increases monotonically with α, until it becomes maximal for α→1⁻. For fixed ɛ>0, the entanglement is a maximum as a function of α at a value α = α_M
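The entropy of entanglement used here is the von Neumann entropy of the qubit's reduced density matrix. A minimal sketch of the definition (the NRG treatment itself is far beyond this snippet):

```python
import numpy as np

def entanglement_entropy(rho):
    """Von Neumann entropy S = -Tr(rho log2 rho) of a qubit's reduced
    density matrix, in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]     # drop numerical zeros
    return float(-np.sum(evals * np.log2(evals)))

# maximally entangled qubit: reduced state is I/2, entropy = 1 bit
rho_max = np.eye(2) / 2
# unentangled (product) state: pure reduced state, entropy = 0
rho_pure = np.array([[1.0, 0.0], [0.0, 0.0]])
```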
Abstract:
Superconducting pairing of electrons in nanoscale metallic particles with discrete energy levels and a fixed number of electrons is described by the reduced Bardeen, Cooper, and Schrieffer model Hamiltonian. We show that this model is integrable by the algebraic Bethe ansatz. The eigenstates, spectrum, conserved operators, integrals of motion, and norms of wave functions are obtained. Furthermore, the quantum inverse problem is solved, meaning that form factors and correlation functions can be explicitly evaluated. Closed form expressions are given for the form factors and correlation functions that describe superconducting pairing.
Abstract:
The long-term performance of an isothermal fixed-bed reactor undergoing catalyst poisoning is theoretically analyzed using the dispersion model. A first-order reaction with d-th order deactivation is assumed, and the model equations are solved by matched asymptotic expansions for large Peclet number. Simple closed-form solutions, uniformly valid in time, are obtained.
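For orientation, in the plug-flow (infinite Peclet) limit the model reduces to a first-order reaction whose rate scales with a catalyst activity that decays by d-th order deactivation, which has a closed-form profile. A sketch under these simplifying assumptions (function and parameter names are illustrative, not the paper's notation):

```python
import numpy as np

def activity(t, kd, d):
    """Catalyst activity under d-th order deactivation da/dt = -kd * a^d
    with a(0) = 1; the d = 1 branch is the usual exponential decay."""
    if d == 1:
        return np.exp(-kd * t)
    return (1.0 + (d - 1.0) * kd * t) ** (-1.0 / (d - 1.0))

def conversion(t, Da, kd, d):
    """Exit conversion of a first-order reaction in the plug-flow
    (infinite Peclet) limit: x = 1 - exp(-Da * a(t)), Da = Damkohler number."""
    return 1.0 - np.exp(-Da * activity(t, kd, d))
```

As the catalyst poisons, a(t) falls and the exit conversion decays with it; the dispersion correction studied in the paper perturbs this plug-flow base state.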
Abstract:
Modeling volcanic phenomena is complicated by free surfaces often supporting large rheological gradients. Analytical solutions and analogue models provide explanations for fundamental characteristics of lava flows, but more sophisticated models are needed, incorporating improved physics and rheology to capture realistic events. To advance our understanding of the flow dynamics of highly viscous lava in Peléean lava dome formation, axisymmetric Finite Element Method (FEM) models of generic endogenous dome growth have been developed. We use a novel technique, the level-set method, which tracks a moving interface, leaving the mesh unaltered. The model equations are formulated in an Eulerian framework. In this paper we test the quality of this technique in our numerical scheme by considering existing analytical and experimental models of lava dome growth which assume a constant Newtonian viscosity. We then compare our model against analytical solutions for real lava domes extruded on Soufrière, St. Vincent, W.I. in 1979 and Mount St. Helens, USA in October 1980 using an effective viscosity. The level-set method is found to be computationally light and robust enough to model the free surface of a growing lava dome. Also, modeling the extruded lava with a constant pressure head naturally results in a drop in extrusion rate with increasing dome height, which can explain lava dome growth observables more appropriately than using a fixed extrusion rate. From the modeling point of view, the level-set method will ultimately provide an opportunity to capture more of the physics while benefiting from the numerical robustness of regular grids.
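The core idea of the level-set technique — advect a scalar field on a fixed mesh and read the interface off as its zero contour — can be shown in one dimension. A toy sketch with first-order upwind differencing (grid, speed, and names are illustrative; the paper's FEM scheme is far more elaborate):

```python
import numpy as np

def advect_level_set(phi, speed, dx, dt, steps):
    """First-order upwind advection of a level-set function,
    phi_t + u * phi_x = 0, on a fixed grid; the interface is the zero
    contour of phi and the mesh itself never moves."""
    for _ in range(steps):
        if speed >= 0:
            dphi = (phi - np.roll(phi, 1)) / dx   # backward difference
        else:
            dphi = (np.roll(phi, -1) - phi) / dx  # forward difference
        phi = phi - dt * speed * dphi
    return phi

x = np.linspace(0.0, 1.0, 201)
phi0 = x - 0.25                 # zero crossing (interface) at x = 0.25
u, dt = 1.0, 0.002              # CFL = u*dt/dx = 0.4, stable
phi = advect_level_set(phi0, u, x[1] - x[0], dt, 125)   # advance to t = 0.25
# the interface should now sit near x = 0.5
```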
Abstract:
Crack tip strain maps have been measured for AISI 4340 high strength steel. No significant creep was observed. The measured values of CTOD were greater than expected from the HRR model. Crack tip branching was observed in every experiment. The direction of crack branching was in the same direction as a major "ridge" of epsilon(yy) strain, which in turn was in the same direction as predicted by the HRR model. Furthermore, the measured magnitudes of the epsilon(yy) strain in this same direction were in general greater than the values predicted by the HRR model. This indicates more plasticity in the crack tip region than expected from the HRR model. This greater plasticity could be related to the larger than expected CTOD values. The following discrepancies between the measured strain fields for AISI 4340 and the HRR predictions are noteworthy: (1) the crack branching; (2) values of CTOD significantly higher than predicted by HRR; (3) the major "ridge" of epsilon(yy) strain at an angle of about 60 degrees to the direction of overall propagation of the fatigue precrack, in which the measured magnitudes of the epsilon(yy) strain were greater than the values predicted by the HRR model; (4) the asymmetric shape of the plastic zone as measured by the epsilon(yy) strain; (5) values of shear strain gamma(xy) significantly higher than predicted by the HRR model. (C) 1999 Kluwer Academic Publishers.
Abstract:
This paper studied the influence of hydrogen and water vapour environments on the plastic behaviour in the vicinity of the crack tip for AISI 4340. Hydrogen and water vapour (at a pressure of 15 Torr) significantly increased the crack tip opening displacement. The crack tip strain distribution in 15 Torr hydrogen was significantly different to that measured in vacuum. In the presence of sufficient hydrogen, the plastic zone was larger, was elongated in the direction of crack propagation and moreover there was significant creep. These observations support the hydrogen enhanced localised plasticity model for hydrogen embrittlement in this steel. The strain distribution in the presence of water vapour also suggests that SCC in AISI 4340 occurs via the hydrogen enhanced localised plasticity mechanism. (C) 1999 Kluwer Academic Publishers.
Abstract:
The concept of local concurrence is used to quantify the entanglement between a single qubit and the remainder of a multiqubit system. For the ground state of the BCS model in the thermodynamic limit the set of local concurrences completely describes the entanglement. As a measure for the entanglement of the full system we investigate the average local concurrence (ALC). We find that the ALC satisfies a simple relation with the order parameter. We then show that for finite systems with a fixed particle number, a relation between the ALC and the condensation energy exposes a threshold coupling. Below the threshold, entanglement measures besides the ALC are significant.
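For a two-qubit pure state, concurrence has the closed form C = |⟨psi|sigma_y ⊗ sigma_y|psi*⟩|. A minimal check of the definition (illustrative only, not the BCS-model calculation):

```python
import numpy as np

def concurrence_pure(psi):
    """Wootters concurrence C = |<psi| (sigma_y x sigma_y) |psi*>| of a
    two-qubit pure state in the basis |00>, |01>, |10>, |11>."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    return float(abs(psi.conj() @ np.kron(sy, sy) @ psi.conj()))

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)  # maximally entangled, C = 1
prod = np.array([1.0, 0.0, 0.0, 0.0])                 # product state, C = 0
```

An average of such single-qubit concurrences over all qubits is the kind of quantity the ALC generalizes to the many-body setting.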
Abstract:
We describe the twisted affine superalgebra sl(2|2)^(2) and its quantized version U_q[sl(2|2)^(2)]. We investigate the tensor product representation of the four-dimensional grade star representation for the fixed-point subsuperalgebra U_q[osp(2|2)]. We work out the tensor product decomposition explicitly and find that the decomposition is not completely reducible. Associated with this four-dimensional grade star representation, we derive two U_q[osp(2|2)]-invariant R-matrices: one of them corresponds to U_q[sl(2|2)^(2)] and the other to U_q[osp(2|2)^(1)]. Using the R-matrix for U_q[sl(2|2)^(2)], we construct a new U_q[osp(2|2)]-invariant strongly correlated electronic model, which is integrable in one dimension. Interestingly, this model reduces, in the q = 1 limit, to the one proposed by Essler et al., which has a larger sl(2|2) symmetry.
Abstract:
The use of cell numbers rather than mass to quantify the size of the biotic phase in animal cell cultures causes several problems. First, the cell size varies with growth conditions, thus yields expressed in terms of cell numbers cannot be used in the normal mass balance sense. Second, experience from microbial systems shows that cell number dynamics lag behind biomass dynamics. This work demonstrates that this lag phenomenon also occurs in animal cell culture. Both the lag phenomenon and the variation in cell size are explained using a simple model of the cell cycle. The basis for the model is that onset of DNA synthesis requires accumulation of G1 cyclins to a prescribed level. This requirement is translated into a requirement for a cell to reach a critical size before commencement of DNA synthesis. A slower-growing cell will spend more time in G1 before reaching the critical mass. In contrast, the period between onset of DNA synthesis and mitosis, tau(B), is fixed. The two parameters in the model, the critical size and tau(B), were determined from eight steady-state measurements of mean cell size in a continuous hybridoma culture. Using these parameters, it was possible to predict with reasonable accuracy the transient behavior in a separate shift-up culture, i.e., a culture where cells were transferred from a lean environment to a rich environment. The implications for analyzing experimental data for animal cell culture are discussed. (C) 1997 John Wiley & Sons, Inc.
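The model's two-parameter structure — a critical size gating entry into S phase plus a fixed B period tau(B) — implies that slower-growing cells have longer cycles purely through a longer G1. A hedged sketch with hypothetical parameter values (exponential single-cell growth is an added simplifying assumption):

```python
import numpy as np

def cycle_time(m_birth, m_crit, mu, tau_b):
    """Cell-cycle duration under the critical-size model: a cell growing
    exponentially at specific rate mu stays in G1 until it reaches the
    critical mass m_crit, then takes a fixed time tau_b through S/G2/M."""
    t_g1 = max(0.0, np.log(m_crit / m_birth) / mu)
    return t_g1 + tau_b

# hypothetical values (hours, arbitrary mass units)
fast = cycle_time(m_birth=1.0, m_crit=1.6, mu=0.05, tau_b=10.0)
slow = cycle_time(m_birth=1.0, m_crit=1.6, mu=0.02, tau_b=10.0)
# the slower-growing cell has the longer cycle, entirely via G1
```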
Abstract:
The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference it is useful to ask whether inferences from a probit model are sensitive to a choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative for the coefficients, and an inequality restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
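Both the ML and Bayesian routes work through the same probit likelihood, P(y=1|x) = Phi(x'beta); the inequality-restricted prior simply truncates the posterior to coefficient signs known a priori. A sketch of the likelihood on synthetic data (the coefficients and data are hypothetical, not the mortgage-choice dataset):

```python
import numpy as np
from math import erf, sqrt

def ncdf(z):
    """Standard normal CDF, Phi(z)."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def probit_loglik(beta, X, y):
    """Log-likelihood of the probit model P(y=1|x) = Phi(x'beta)."""
    p = np.array([ncdf(v) for v in X @ beta])
    p = np.clip(p, 1e-12, 1.0 - 1e-12)   # guard the logs
    return float(np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))

# synthetic binary choices generated from beta_true = [-0.5, 1.0]
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
latent = X @ np.array([-0.5, 1.0]) + rng.normal(size=500)
y = (latent > 0).astype(float)
```

ML maximizes this function over beta; the truncated-uniform Bayesian analysis restricts the same surface to the orthant of admissible coefficient signs before normalizing it into a posterior.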
Abstract:
The purpose of this study was to develop a newborn piglet model of hypoxia/ischaemia which would better emulate the clinical situation in the asphyxiated human neonate and produce a consistent degree of histopathological injury following the insult. One-day-old piglets (n = 18) were anaesthetised with a mixture of propofol (10 mg/kg/h) and alfentanil (5,5.5 μg/kg/h) i.v. The piglets were intubated and ventilated. Physiological variables were monitored continuously. Hypoxia was induced by decreasing the inspired oxygen (FiO2) to 3-4% and adjusting FiO2 to maintain the cerebral function monitor peak amplitude at less than or equal to 5 μV. The duration of the mild insult was 20 min, while the severe insult was 30 min, which included 10 min where the blood pressure was allowed to fall below 70% of baseline. Control piglets (n = 4 of 18) were subjected to the same protocol except for the hypoxic/ischaemic insult. The piglets were allowed to recover from anaesthesia then euthanased 72 h after the insult. The brains were perfusion-fixed, removed and embedded in paraffin. Coronal sections were stained with haematoxylin/eosin. A blinded observer examined the frontal and parietal cortex, hippocampus, basal ganglia, thalamus and cerebellum for the degree of damage. The total mean histology score for the five areas of the brain for the severe insult was 15.6 ± 4.4 (mean ± S.D., n = 7), whereas no damage was seen in either the mild insult (n = 4) or control groups. This 'severe damage' model produces a consistent level of damage and will prove useful for examining potential neuroprotective therapies in the neonatal brain. (C) 2001 Elsevier Science B.V. All rights reserved.