50 results for one-meson-exchange: independent-particle shell model
Abstract:
A new completely integrable model of strongly correlated electrons is proposed which describes two competing interactions: correlated one-particle hopping and a Hubbard-like interaction. The integrability follows from the fact that the Hamiltonian is derivable from a one-parameter family of commuting transfer matrices. The Bethe ansatz equations are derived by the algebraic Bethe ansatz method.
Abstract:
The Izergin-Korepin model on a semi-infinite lattice is diagonalized by using the level-one vertex operators of the twisted quantum affine algebra U_q[A_2^(2)]. We give the bosonization of the vacuum state with zero particle content. Excitation states are given by the action of the vertex operators on the vacuum state. We derive the boundary S-matrix. We give an integral expression for the correlation functions of the boundary model and derive the difference equations which they satisfy. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
A generalised model for the prediction of single char particle gasification dynamics, accounting for multi-component mass transfer with chemical reaction, heat transfer, as well as structure evolution and peripheral fragmentation, is developed in this paper. Maxwell-Stefan analysis is uniquely applied to both micro- and macropores within the framework of the dusty-gas model to account for the bidisperse nature of the char, which differs significantly from conventional models based on a single pore type. The peripheral fragmentation and random-pore correlation incorporated into the model enable prediction of structure/reactivity relationships. The occurrence of chemical reaction within the boundary layer reported by Biggs and Agarwal (Chem. Eng. Sci. 52 (1997) 941) has been confirmed through an analysis of the CO/CO2 product ratio obtained from model simulations. However, it is also quantitatively observed that the significance of the boundary layer reaction diminishes notably with decreasing oxygen concentration in the flue gas, operating pressure and film thickness. Computations have also shown that in the presence of diffusional gradients peripheral fragmentation occurs at the surface in the early stages, after which conversion quickens significantly due to the small particle size. Results showing the early commencement of peripheral fragmentation at relatively low overall conversion, obtained from a large number of simulations, agree well with experimental observations reported by Feng and Bhatia (Energy & Fuels 14 (2000) 297). Comprehensive analysis of the simulation results is carried out based on well-accepted physical principles to rationalise the model predictions. (C) 2001 Elsevier Science Ltd. All rights reserved.
Abstract:
Form factors are derived for a model describing the coherent Josephson tunneling between two coupled Bose-Einstein condensates. This is achieved by studying the exact solution of the model within the framework of the algebraic Bethe ansatz. In this approach the form factors are expressed through determinant representations which are functions of the roots of the Bethe ansatz equations.
Abstract:
A new integrable model, a variant of the one-dimensional Hubbard model, is proposed. The integrability of the model is verified by presenting the associated quantum R-matrix, which satisfies the Yang-Baxter equation. We argue that the new model possesses SO(4) algebra symmetry, containing a representation of the eta-pairing SU(2) algebra and a spin SU(2) algebra. The algebraic Bethe ansatz is studied by means of the quantum inverse scattering method, and the spectrum of the Hamiltonian, the eigenvectors, and the Bethe ansatz equations are discussed. (C) 2002 American Institute of Physics.
Abstract:
We study, with exact diagonalization, the zero temperature properties of the quarter-filled extended Hubbard model on a square lattice. We find that increasing the ratio of the intersite Coulomb repulsion, V, to the bandwidth drives the system from a metal to a charge ordered insulator. The evolution of the optical conductivity spectrum with increasing V is in agreement with the observed optical conductivity of several layered molecular crystals with the theta and beta crystal structures.
Abstract:
The performance of the Oxford University Gun Tunnel has been estimated using a quasi-one-dimensional simulation of the facility gas dynamics. The modelling of the actual facility area variations so as to adequately simulate both shock reflection and flow discharge processes has been considered in some detail. Test gas stagnation pressure and temperature histories are compared with measurements at two different operating conditions - one with nitrogen and the other with carbon dioxide as the test gas. It is demonstrated that both the simulated pressures and temperatures are typically within 3% of the experimental measurements.
Abstract:
In order to understand the earthquake nucleation process, we need to understand the effective frictional behavior of faults with complex geometry and fault gouge zones. One important aspect of this is the interaction between the friction law governing the behavior of the fault at the microscopic level and the resulting macroscopic behavior of the fault zone. Numerical simulations offer a possibility to investigate the behavior of faults on many different scales and thus provide a means to gain insight into fault zone dynamics on scales which are not accessible to laboratory experiments. Numerical experiments have been performed to investigate the influence of the geometric configuration of faults with rate- and state-dependent friction at the particle contacts on the effective frictional behavior of these faults. The numerical experiments are designed to be similar to laboratory experiments by DIETERICH and KILGORE (1994), in which a slide-hold-slide cycle was performed between two blocks of material and the resulting peak friction was plotted against holding time. Simulations with a flat fault without fault gouge were performed to verify the implementation; these showed close agreement with comparable laboratory experiments. The simulations performed with a fault containing gouge demonstrated a strong dependence of the critical slip distance D_c on the roughness of the fault surfaces and are in qualitative agreement with laboratory experiments.
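The slide-hold-slide healing described in this abstract can be sketched with the standard Dieterich rate- and state-dependent friction law combined with the aging evolution law; the parameter values below are illustrative assumptions, not those used in the simulations:

```python
import math

# Dieterich rate-and-state friction: mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc).
# During a hold (V ~ 0) the aging law d(theta)/dt = 1 - V*theta/Dc gives
# theta(t) = theta_ss + t, so peak friction on re-sliding grows ~ b*ln(t).

MU0, A, B = 0.6, 0.010, 0.015   # illustrative friction parameters (assumptions)
V0 = 1e-6                       # reference slip rate, m/s (assumed)
DC = 1e-5                       # critical slip distance, m (assumed)

def peak_friction_after_hold(t_hold, v_slide=V0):
    """Peak friction when sliding resumes after a hold of t_hold seconds."""
    theta_ss = DC / v_slide            # steady-state state variable before hold
    theta = theta_ss + t_hold          # state grows linearly during the hold
    return MU0 + A * math.log(v_slide / V0) + B * math.log(V0 * theta / DC)

for t in (1.0, 10.0, 100.0, 1000.0):   # hold times in seconds
    print(t, round(peak_friction_after_hold(t), 4))
```

In the long-hold limit the peak friction grows by roughly b*ln(10) per decade of hold time, which is the log-linear healing trend reported in the slide-hold-slide experiments of DIETERICH and KILGORE (1994).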
Abstract:
Modeling physiological processes using tracer kinetic methods requires knowledge of the time course of the tracer concentration in the blood supplying the organ. For liver studies, however, inaccessibility of the portal vein makes direct measurement of the hepatic dual-input function impossible in humans. We want to develop a method to predict the portal venous time-activity curve from measurements of an arterial time-activity curve. An impulse-response function based on a continuous distribution of washout constants is developed and validated for the gut. Experiments with simultaneous blood sampling in the aorta and portal vein were made in 13 anesthetized pigs following inhalation of intravascular [O-15]CO or injections of diffusible 3-O-[C-11]methylglucose (MG). The parameters of the impulse-response function have a physiological interpretation in terms of the distribution of washout constants and are mathematically equivalent to the mean transit time (T) and the standard deviation of transit times. The results include estimates of mean transit times from the aorta to the portal vein in pigs: T = 0.35 +/- 0.05 min for CO and 1.7 +/- 0.1 min for MG. The prediction of the portal venous time-activity curve benefits from constraining the regression fits by parameters estimated independently. This is strong evidence for the physiological relevance of the impulse-response function, which includes asymptotically, and thereby justifies kinetically, a useful and simple power law. The similarity between our parameter estimates in pigs and parameter estimates in normal humans suggests that the proposed model can be adapted for use in humans.
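The prediction scheme (convolving a measured arterial curve with a normalized impulse-response function) can be sketched numerically. The single-exponential kernel and all parameter values below are illustrative assumptions; the paper's actual h(t) is built from a continuous distribution of washout constants:

```python
import math

# Predict a downstream (portal-venous-like) curve by convolving an arterial
# input curve with a normalized impulse-response function h(t). A synthetic
# Gaussian bolus stands in for the measured arterial time-activity curve.

DT = 0.01          # time step, min
T_MEAN = 0.35      # assumed mean transit time, min (CO-like value)

t_axis = [i * DT for i in range(500)]
arterial = [math.exp(-((t - 0.5) / 0.15) ** 2) for t in t_axis]  # synthetic bolus

# Illustrative single-exponential washout kernel with mean transit time T_MEAN
h = [math.exp(-t / T_MEAN) / T_MEAN for t in t_axis]

def convolve(inp, kernel, dt):
    """Causal discrete convolution approximating the integral (inp * kernel)(t)."""
    return [sum(inp[n - k] * kernel[k] for k in range(n + 1)) * dt
            for n in range(len(inp))]

portal = convolve(arterial, h, DT)

# The predicted curve is delayed and dispersed relative to the input
print(max(arterial), max(portal))
```

The delayed, flattened peak of the predicted curve mirrors the dispersion of transit times through the gut that the impulse-response function is meant to capture.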
Abstract:
Mineral processing plants use two main processes: comminution and separation. The objective of comminution is to break complex particles consisting of numerous minerals into smaller, simpler particles in which individual particles consist primarily of only one mineral. The process in which the mineral composition distribution in particles changes due to breakage is called 'liberation'. The purpose of separation is to separate particles consisting of valuable mineral from those containing nonvaluable mineral. The energy required to break particles to fine sizes is expensive, so the mineral processing engineer must design the circuit so that breakage of liberated particles is reduced in favour of breaking composite particles. To effectively optimize a circuit through simulation it is necessary to predict how the mineral composition distributions change due to comminution; such a model is called a 'liberation model for comminution'. It was generally considered that such a model should incorporate information about the ore, such as its texture. However, the relationship between feed and product particles can be estimated using a probability method, the probability being that a feed particle of a particular composition and size will form a particular product particle of a particular size and composition. The model is based on maximizing the entropy of this probability subject to mass and composition constraints. This methodology allows a liberation model to be developed not only for binary particles but also for particles consisting of many minerals. Results from applying the model to real plant ore are presented. A laboratory ball mill was used to break particles, and the results from this experiment were used to estimate the kernel that represents the relationship between parent and progeny particles.
A second feed, consisting primarily of heavy particles subsampled from the main ore, was then ground through the same mill. The results from the first experiment were used to predict the product of the second experiment, and the agreement between the predicted and actual results is very good. It is nevertheless recommended that more extensive validation be carried out to fully evaluate the method. (C) 2003 Elsevier Ltd. All rights reserved.
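The entropy-maximization step can be illustrated on a toy composition grid. Maximizing entropy subject to a normalization constraint and a mean-composition (mass balance) constraint yields an exponential-family distribution whose multiplier can be found by bisection; the grid, parent composition, and single-size treatment below are simplifying assumptions, not the paper's full size-by-composition kernel:

```python
import math

# Maximum-entropy sketch of a liberation kernel: for a parent particle of
# mineral fraction C_PARENT, spread progeny over a composition grid so that
# entropy is maximal subject to (i) probabilities summing to one and
# (ii) the mean progeny composition equalling the parent composition.

GRID = [j / 10 for j in range(11)]   # progeny composition classes 0.0 .. 1.0
C_PARENT = 0.3                       # assumed parent mineral fraction

def maxent_kernel(c, grid, tol=1e-10):
    """Max entropy with a mean constraint gives p_j proportional to
    exp(lam * g_j); find lam by bisection so the mean equals c."""
    def mean_for(lam):
        w = [math.exp(lam * g) for g in grid]
        return sum(wi * g for wi, g in zip(w, grid)) / sum(w)
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < c:        # mean_for is increasing in lam
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * g) for g in grid]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_kernel(C_PARENT, GRID)
print(sum(p), sum(pi * g for pi, g in zip(p, GRID)))  # normalization, mean
```

The resulting distribution is the least-committal progeny spread consistent with conserving the mineral mass of the parent, which is the spirit of the kernel estimated from the ball-mill data.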
Abstract:
Statistical tests of Load-Unload Response Ratio (LURR) signals are carried out to verify the statistical robustness of previous studies using the Lattice Solid Model (MORA et al., 2002b). In each case 24 groups of samples with the same macroscopic parameters (tidal perturbation amplitude A, period T and tectonic loading rate k) but different particle arrangements are employed. Results of uni-axial compression experiments show that before the normalized time of catastrophic failure, the ensemble-average LURR value rises significantly, in agreement with the observations of high LURR prior to large earthquakes. In shearing tests, two parameters are found to control the correlation between earthquake occurrence and tidal stress. One, A/(kT), controls the phase shift between the peak seismicity rate and the peak amplitude of the perturbation stress; as this parameter increases, the phase shift decreases. The other, AT/k, controls the height of the probability density function (PDF) of the modeled seismicity; as it increases, the PDF becomes sharper and narrower, indicating strong triggering. Statistical studies of LURR signals in shearing tests also suggest that, except in strong-triggering cases where LURR cannot be calculated due to poor data in the unloading cycles, larger events are more likely than smaller ones to occur in high-LURR periods, supporting the LURR hypothesis.
Abstract:
When studying genotype × environment interaction in multi-environment trials, plant breeders and geneticists often consider one of the effects, environments or genotypes, to be fixed and the other to be random. There are, however, two main formulations for variance component estimation in the mixed-model situation, referred to as the unconstrained-parameters (UP) and constrained-parameters (CP) formulations. These formulations give different estimates of genetic correlation and heritability, as well as different tests of significance for the random-effects factor. The definitions of main effects and interactions and the consequences of those definitions should be clearly understood, and the selected formulation should be consistent for both fixed and random effects. A discussion of the practical outcomes of using the two formulations in the analysis of balanced data from multi-environment trials is presented. It is recommended that the CP formulation be used because of the meaning of its parameters and the corresponding variance components. When managed (fixed) environments are considered, users will have more confidence in prediction for them but will not be overconfident in prediction for the target (random) environments. Genetic gain (predicted response to selection in the target environments from the managed environments) is independent of the formulation.
Abstract:
Living radical polymerization has allowed complex polymer architectures to be synthesized in bulk, solution, and water. The most versatile of these techniques is reversible addition-fragmentation chain transfer (RAFT), which allows a wide range of functional and nonfunctional polymers to be made with predictable molecular weight distributions (MWDs), ranging from very narrow to quite broad. Because of the great complexity of the RAFT mechanism, it is not obvious how the kinetic parameters affect the rate of polymerization and the MWD. The aim of this article is therefore to provide useful insights into the important kinetic parameters that control the rate of polymerization and the evolution of the MWD with conversion. We discuss how a change in the chain-transfer constant can affect the evolution of the MWD, and show how, in principle, a single RAFT agent can be used to obtain a polymer with any MWD. Retardation and inhibition are discussed in terms of (1) the reactivity of the leaving R group and (2) the intermediate radical termination model versus the slow fragmentation model. (c) 2005 Wiley Periodicals, Inc.
Abstract:
The effect of antiferromagnetic spin fluctuations on two-dimensional quarter-filled systems is studied theoretically. An effective t-J'-V model on a square lattice, which accounts for checkerboard charge fluctuations and next-nearest-neighbor antiferromagnetic spin fluctuations, is considered. Calculations based on large-N theory for this model show that the exchange interaction J' increases the attraction between electrons in the d_xy channel only, so that charge and spin fluctuations work cooperatively to produce d_xy pairing.
Abstract:
In cell lifespan studies the exponential nature of cell survival curves is often interpreted as showing that the rate of death is independent of the age of the cells within the population. Here we present an alternative model in which cells that die are replaced, and the age and lifespan of the population pool are monitored until a steady state is reached. In our model newly generated individual cells are given a determined lifespan drawn from one of a number of known distributions, including the lognormal, which is frequently found in nature. For lognormal lifespans the analytic steady-state survival curve obtained can be well fit by a single or double exponential, depending on the mean and standard deviation. Thus, experimental evidence for exponential lifespans of one and/or two populations cannot be taken as definitive evidence for time and age independence of cell survival. A related model for a dividing population in steady state is also developed. We propose that the common adoption of age-independent, constant rates of change in biological modelling may be responsible for significant errors, both of interpretation and of mathematical deduction. We suggest that additional mathematical and experimental methods must be used to resolve the relationship between time and behavioural changes by cells that are predominantly unsynchronized.
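The replacement model lends itself to a small Monte-Carlo sketch. At steady state a cell observed "now" has a length-biased total lifespan and a uniformly distributed age within it, so its remaining lifetime can be sampled directly; the lognormal parameters and cohort size below are illustrative assumptions, not the paper's fitted values:

```python
import random

# Monte-Carlo sketch of the steady-state replacement model: every cell gets
# a lifespan drawn from a lognormal distribution, dead cells are replaced,
# and we follow the remaining lifetimes of the cells present at steady state.

random.seed(1)
MU, SIGMA = 1.0, 0.5                 # assumed lognormal lifespan parameters
N = 20000

lifespans = [random.lognormvariate(MU, SIGMA) for _ in range(N)]
# A cell observed at a random instant has a length-biased total lifespan...
observed = random.choices(lifespans, weights=lifespans, k=N)
# ...and a uniformly distributed age within it, so the residual is uniform.
residuals = [L * random.random() for L in observed]

def survival(ts, rs):
    """Empirical survival curve: fraction of cells still alive at each t."""
    n = len(rs)
    return [sum(1 for r in rs if r > t) / n for t in ts]

ts = [0.5 * k for k in range(11)]
curve = survival(ts, residuals)
for t, s in zip(ts, curve):
    print(t, round(s, 3))
```

Fitting the resulting curve with one or two exponentials, as the abstract notes for the analytic solution, would reproduce apparently "age-independent" decay even though every individual lifespan here is fully determined at birth.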