187 results for O-lattice Theory


Relevance: 20.00%

Abstract:

A comparative study of carbon gasification with O2 and CO2 was conducted using density functional theory calculations. It was found that the activation energy and the number of active sites in carbon gasification reactions are significantly affected by both the capacity and the manner of gas chemisorption. O2 has a strong adsorption capacity, and its dissociative chemisorption is thermodynamically favorable on either the bare carbon surface or isolated edge sites. As a result, a large number of semiquinone and o-quinone oxygen species can form, significantly increasing the number of active sites. Moreover, the weaker o-quinone C-C bonds drive the reaction forward at a ca. 30% lower activation energy. Epoxy oxygen forms under relatively high O2 pressure, but it only increases the number of active sites; it does not further reduce the activation energy. CO2 has a weaker adsorption capacity: its dissociative chemisorption can occur only on two consecutive edge sites, the o-quinone oxygen formed from CO2 chemisorption is negligible, and epoxy oxygen does not form at all. The CO2-carbon reaction therefore requires a ca. 30% higher activation energy. Furthermore, the number of effective active sites is reduced by the manner of CO2 chemisorption. The combination of higher activation energy and fewer active sites leads to the much lower rate of the CO2-carbon reaction.
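
The ca. 30% barrier difference matters because reaction rates depend exponentially on activation energy. As a rough illustration of that sensitivity (the barrier values below are hypothetical, not taken from the study), the Arrhenius equation k = A exp(-Ea/RT) gives a rate gap of several orders of magnitude at gasification temperatures:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius(A, Ea, T):
    """Arrhenius rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Hypothetical barriers for illustration only: assume a nominal
# CO2-carbon barrier of 250 kJ/mol and an O2-carbon barrier ca. 30% lower.
Ea_co2 = 250e3          # J/mol (assumed, not from the paper)
Ea_o2 = 0.7 * Ea_co2    # ca. 30% lower
T = 1100.0              # K, a typical gasification temperature

ratio = arrhenius(1.0, Ea_o2, T) / arrhenius(1.0, Ea_co2, T)
print(f"rate ratio k(O2)/k(CO2) at {T:.0f} K: {ratio:.1e}")
```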

Relevance: 20.00%

Abstract:

It has been argued that power-law time-to-failure fits of cumulative Benioff strain and an evolution in size-frequency statistics in the lead-up to large earthquakes are evidence that the crust behaves as a Critical Point (CP) system. If so, intermediate-term earthquake prediction is possible. However, this hypothesis has not been proven. If the crust does behave as a CP system, stress correlation lengths should grow in the lead-up to large events through the action of small to moderate ruptures, and drop sharply once a large event occurs. This evolution in stress correlation lengths cannot be observed directly, however. Here we show, using the lattice solid model to describe discontinuous elasto-dynamic systems subjected to shear and compression, that it is possible for correlation lengths to exhibit CP-type evolution. In the case of a granular system subjected to shear, this evolution occurs in the lead-up to the largest event and is accompanied by an increasing rate of moderate-sized events and power-law acceleration of Benioff strain release. In the case of an intact sample subjected to compression, the evolution occurs only after a mature fracture system has developed. The results support the existence of a physical mechanism for intermediate-term earthquake forecasting and suggest that this mechanism is fault-system dependent, which offers an explanation of why accelerating Benioff strain release is not observed prior to all large earthquakes. The results demonstrate an underlying evolution in discontinuous elasto-dynamic systems that is capable of providing a basis for forecasting catastrophic failure and earthquakes.
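
The power-law time-to-failure relation referred to here is conventionally written as cumulative Benioff strain epsilon(t) = A + B(tf - t)^m. A minimal fitting sketch on synthetic data (the functional form is standard in this literature; the data and parameter values below are illustrative, not from the simulations):

```python
import numpy as np
from scipy.optimize import curve_fit

def time_to_failure(t, A, B, tf, m):
    """Cumulative Benioff strain A + B*(tf - t)^m, with the base
    clipped to stay positive while the optimizer explores tf."""
    return A + B * np.maximum(tf - t, 1e-9) ** m

# Synthetic 'observed' accelerating strain release toward tf = 10.0
# (illustrative only).
t = np.linspace(0.0, 9.5, 50)
true = time_to_failure(t, 5.0, -2.0, 10.0, 0.3)
obs = true + np.random.default_rng(0).normal(0.0, 0.02, t.size)

# Fit A, B, tf, m; the retrodicted failure time tf is the quantity
# of interest for intermediate-term forecasting.
popt, _ = curve_fit(time_to_failure, t, obs, p0=(4.0, -1.5, 10.5, 0.5))
print(f"estimated failure time tf = {popt[2]:.2f}")
```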

Relevance: 20.00%

Abstract:

To understand the earthquake nucleation process, we need to understand the effective frictional behavior of faults with complex geometry and fault gouge zones. One important aspect of this is the interaction between the friction law governing the behavior of the fault at the microscopic level and the resulting macroscopic behavior of the fault zone. Numerical simulations offer a way to investigate the behavior of faults on many different scales, and thus provide a means to gain insight into fault zone dynamics on scales that are not accessible to laboratory experiments. Numerical experiments were performed to investigate how the geometric configuration of faults with rate- and state-dependent friction at the particle contacts influences the effective frictional behavior of these faults. The numerical experiments are designed to mirror the laboratory experiments of Dieterich and Kilgore (1994), in which a slide-hold-slide cycle was performed between two blocks of material and the resulting peak friction was plotted against holding time. Simulations with a flat fault without fault gouge were performed to verify the implementation, and showed close agreement with comparable laboratory experiments. Simulations with a fault containing gouge demonstrated a strong dependence of the critical slip distance Dc on the roughness of the fault surfaces, in qualitative agreement with laboratory experiments.
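
The rate- and state-dependent friction at the heart of these simulations predicts the observed growth of peak friction with holding time directly. A minimal sketch of the Dieterich ("aging") form of the law, with illustrative parameter values rather than those used in the simulations:

```python
import numpy as np

# Dieterich rate-and-state friction: mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc),
# with aging-law state evolution d(theta)/dt = 1 - V*theta/Dc.
mu0, a, b = 0.6, 0.010, 0.015   # illustrative values
V0, Dc = 1.0e-6, 1.0e-5         # reference velocity (m/s), critical slip (m)

def peak_friction_after_hold(t_hold, V=1.0e-6):
    """During a near-stationary hold the state variable heals as
    theta ~ theta_ss + t_hold, so peak friction on re-sliding grows
    roughly logarithmically with hold time."""
    theta = Dc / V + t_hold      # steady-state value plus healing
    return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

for t_hold in (1.0, 10.0, 100.0, 1000.0):  # hold times in seconds
    print(f"hold {t_hold:6.0f} s -> peak mu = {peak_friction_after_hold(t_hold):.4f}")
```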

Relevance: 20.00%

Abstract:

This article presents a fairness theory-based conceptual framework for studying and managing consumers' emotions during service recovery attempts. The framework highlights the central role played by counterfactual thinking and accountability. Findings from five focus groups are presented to lend further support to the framework. Essentially, the article argues that a service failure event triggers an emotional response in the consumer, who then commences an assessment of the situation, considering procedural justice, interactional justice, and distributive justice elements while engaging in counterfactual thinking and apportioning accountability. More specifically, the customer assesses whether the service provider could and should have done more to remedy the problem, and how the customer would have felt had those actions been taken. The authors argue that situational effort is taken into account during this process when assessing accountability: when service providers do not appear to exhibit an appropriate level of effort, consumers attribute this to the service provider not caring, which in turn leads the customer to feel stronger negative emotions, such as anger and frustration. Managerial implications of the study are discussed.

Relevance: 20.00%

Abstract:

Today, the standard approach to kinetic analysis of dynamic PET studies is compartment modeling, in which the tracer and its metabolites are confined to a few well-mixed compartments. We examine whether the standard model is suitable for modern PET data or whether theories with more physiologic realism can advance the interpretation of dynamic PET data. A more detailed microvascular theory is developed for intravascular tracers in single-capillary and multiple-capillary systems. The microvascular models, which account for concentration gradients in capillaries, are validated and compared with the standard model in a pig liver study. Methods: Eight pigs underwent a 5-min dynamic PET study after O-15-carbon monoxide inhalation. Throughout each experiment, hepatic arterial blood and portal venous blood were sampled, and flow was measured with transit-time flow meters. The hepatic dual-inlet concentration was calculated as the flow-weighted inlet concentration. Dynamic PET data were analyzed with a traditional single-compartment model and 2 microvascular models. Results: The microvascular models fit the tissue activity of an intravascular tracer better than the compartment model did; in particular, the fit of the early dynamic phase after a tracer bolus injection was much improved. The regional hepatic blood flow estimates provided by the microvascular models (1.3 +/- 0.3 mL min^-1 mL^-1 for the single-capillary model and 1.14 +/- 0.14 mL min^-1 mL^-1 for the multiple-capillary model; mean +/- SEM, mL of blood min^-1 mL of liver tissue^-1) agreed with the total blood flow measured by flow meters and normalized to liver weight (1.03 +/- 0.12 mL min^-1 mL^-1). Conclusion: Compared with the standard compartment model, the 2 microvascular models provide a superior description of tissue activity after an intravascular tracer bolus injection. The microvascular models include only parameters with a clear-cut physiologic interpretation and are applicable to capillary beds in any organ. In this study, the microvascular models were validated for the liver and provided quantitative regional flow estimates in agreement with flow measurements.
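
For contrast, the "standard model" referred to is the one-tissue compartment model, in which tissue activity obeys a single well-mixed ordinary differential equation; the microvascular models replace this well-mixed assumption with concentration gradients along the capillary. A minimal sketch with illustrative parameters (not the paper's fitted values):

```python
import numpy as np

# One-tissue compartment model: dC_T/dt = K1*C_in(t) - k2*C_T(t).
K1, k2 = 1.0, 0.8             # illustrative rate constants
dt = 0.01                     # time step, min
t = np.arange(0.0, 5.0, dt)
C_in = t * np.exp(-3.0 * t)   # toy dual-inlet input function

C_T = np.zeros_like(t)        # tissue activity curve
for i in range(1, t.size):    # forward-Euler integration
    dC = K1 * C_in[i - 1] - k2 * C_T[i - 1]
    C_T[i] = C_T[i - 1] + dt * dC

print(f"peak tissue activity {C_T.max():.3f} at t = {t[C_T.argmax()]:.2f} min")
```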

Relevance: 20.00%

Abstract:

Modeling physiological processes with tracer kinetic methods requires knowledge of the time course of the tracer concentration in the blood supplying the organ. For liver studies, however, the inaccessibility of the portal vein makes direct measurement of the hepatic dual-input function impossible in humans. We aim to develop a method for predicting the portal venous time-activity curve from measurements of an arterial time-activity curve. An impulse-response function based on a continuous distribution of washout constants is developed and validated for the gut. Experiments with simultaneous blood sampling in the aorta and portal vein were performed in 13 anesthetized pigs following inhalation of intravascular [O-15]CO or injection of diffusible 3-O-[C-11]methylglucose (MG). The parameters of the impulse-response function have a physiological interpretation in terms of the distribution of washout constants and are mathematically equivalent to the mean transit time (T) and the standard deviation of transit times. The results include estimates of mean transit times from the aorta to the portal vein in pigs: T = 0.35 +/- 0.05 min for CO and 1.7 +/- 0.1 min for MG. The prediction of the portal venous time-activity curve benefits from constraining the regression fits with parameters estimated independently. This is strong evidence for the physiological relevance of the impulse-response function, which asymptotically includes, and thereby kinetically justifies, a useful and simple power law. The similarity between our parameter estimates in pigs and parameter estimates in normal humans suggests that the proposed model can be adapted for use in humans.
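
Operationally, the prediction amounts to convolving the measured arterial curve with the impulse-response function. A minimal sketch that assumes a gamma-shaped density of transit times parameterized by the mean T and standard deviation s (the paper's exact density over washout constants is not reproduced here; the arterial curve and s are illustrative):

```python
import numpy as np
from scipy.stats import gamma

dt = 0.01                        # min
t = np.arange(0.0, 10.0, dt)

def impulse_response(t, T, s):
    """Gamma-density impulse response with mean transit time T and
    standard deviation of transit times s (an assumed form)."""
    shape = (T / s) ** 2
    scale = s ** 2 / T
    return gamma.pdf(t, a=shape, scale=scale)

C_art = t * np.exp(-2.0 * t)                 # toy arterial input curve
h = impulse_response(t, T=0.35, s=0.2)       # T from the CO estimate above
C_pv = np.convolve(C_art, h)[: t.size] * dt  # predicted portal venous curve

lag = (C_pv.argmax() - C_art.argmax()) * dt
print(f"portal venous peak lags arterial peak by {lag:.2f} min")
```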

Relevance: 20.00%

Abstract:

The sensitivity of the output of a linear operator to its input can be quantified in various ways. In control theory, the input is usually interpreted as a disturbance, and the output is to be minimized in some sense. In stochastic worst-case design settings, the disturbance is considered random with an imprecisely known probability distribution. The prior set of probability measures can be chosen so as to quantify how far the disturbance deviates from the white-noise hypothesis of Linear Quadratic Gaussian control. Such deviation can be measured by the minimal Kullback-Leibler informational divergence from the Gaussian distributions with zero mean and scalar covariance matrices. The resulting anisotropy functional is defined for finite-power random vectors. Originally, anisotropy was introduced for directionally generic random vectors as the relative entropy of the normalized vector with respect to the uniform distribution on the unit sphere. The associated a-anisotropic norm of a matrix is then its maximum root mean square or average energy gain with respect to finite-power or directionally generic inputs whose anisotropy is bounded above by a >= 0. We give a systematic comparison of the anisotropy functionals and the associated norms. These are considered for unboundedly growing fragments of homogeneous Gaussian random fields on a multidimensional integer lattice to yield mean anisotropy. Correspondingly, the anisotropic norms of finite matrices are extended to bounded linear translation-invariant operators over such fields.
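
For reference, the anisotropy functional sketched above is usually written as a minimal Kullback-Leibler divergence over isotropic Gaussian reference measures, with the a-anisotropic norm as the corresponding constrained worst-case gain. The following is a commonly quoted form from the anisotropy-based control literature; the exact normalization used in this paper may differ:

```latex
% Anisotropy of an R^m-valued random vector w with finite second moment:
% minimal KL divergence from zero-mean Gaussian laws with scalar covariance.
\mathbf{A}(w) = \min_{\lambda > 0} D\left( P_w \,\middle\|\, \mathcal{N}(0, \lambda I_m) \right)

% Associated a-anisotropic norm of a matrix F: worst-case root mean
% square gain over inputs of anisotropy at most a.
\left\| F \right\|_{a} = \sup \left\{ \frac{\| F w \|}{\| w \|} : \mathbf{A}(w) \le a \right\}
```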

Relevance: 20.00%

Abstract:

Motivated by the application of current superalgebras in the study of disordered systems such as the random XY and Dirac models, we investigate the gl(2|2) current superalgebra at general level k. We construct its free field representation and the corresponding Sugawara energy-momentum tensor in a non-standard basis. Three screen currents of the first kind are also presented.

Relevance: 20.00%

Abstract:

A Co sintering aid has been added to Ce1.9Gd0.1O1.95 (CGO) by treating a commercial powder with Co(NO3)2 (CoCGO). X-ray diffraction (XRD) measurements of the lattice parameter indicated that the Co was located on the CGO particle surface after calcination at 650 °C. After heat treatment at temperatures above 650 °C, the room-temperature lattice parameter of CGO was found to increase, indicating redistribution of the Gd. Compared with CGO, the lattice parameter of CGO + 2 cation% Co (2CoCGO) was lower for a given temperature (650-1100 °C). A.C. impedance measurements revealed that the lattice conductivity of 2CoCGO was enhanced when densified at lower temperatures. Transmission electron microscopy (TEM) showed that, even after sintering for 4 h at 980 °C, most of the Co was located at grain boundaries.
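
The lattice-parameter values discussed here come from standard XRD peak analysis: for a cubic fluorite such as CGO, each indexed Bragg peak yields the lattice parameter directly. A minimal sketch of that arithmetic (the peak position below is illustrative, close to the (111) reflection of Gd-doped ceria, not a value from the paper):

```python
import math

WAVELENGTH = 1.5406  # Cu K-alpha1 X-ray wavelength, angstroms

def cubic_lattice_parameter(two_theta_deg, h, k, l):
    """Bragg's law (lambda = 2 d sin(theta)) combined with the cubic
    plane spacing d = a / sqrt(h^2 + k^2 + l^2)."""
    theta = math.radians(two_theta_deg / 2.0)
    d = WAVELENGTH / (2.0 * math.sin(theta))
    return d * math.sqrt(h**2 + k**2 + l**2)

a = cubic_lattice_parameter(28.5, 1, 1, 1)  # illustrative (111) peak
print(f"lattice parameter a = {a:.4f} angstroms")
```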

Relevance: 20.00%

Abstract:

In this paper we examine the effects of varying several experimental parameters in the Kane quantum computer architecture (A-gate voltage, qubit depth below the silicon oxide barrier, and back-gate depth) in order to explore how these variables affect the donor electron density. In particular, we calculate the resonance frequency of the donor nuclei as a function of these parameters. To do this, we calculated the donor electron wave function variationally with an effective-mass Hamiltonian approach, using a basis of deformed hydrogenic orbitals; this approach was then extended to include the electric-field Hamiltonian and the silicon host geometry. We found that the phosphorus donor electron wave function was very sensitive to all the experimental variables studied, so optimizing the operation of these devices requires control of all the parameters varied in this paper.
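
The variational procedure described is conceptually simple: choose a trial wave function with free parameters and minimize the energy expectation value. A toy one-parameter version for an isotropic effective-mass hydrogenic donor, in effective atomic units (the paper's deformed-orbital basis, gate fields, and host geometry are omitted):

```python
from scipy.optimize import minimize_scalar

def energy(a):
    """<H> for a 1s trial orbital exp(-r/a) under an effective-mass
    hydrogenic Hamiltonian: kinetic 1/(2 a^2) plus Coulomb -1/a,
    in effective atomic units."""
    return 0.5 / a**2 - 1.0 / a

# The variational minimum recovers the exact ground state here:
# a* = 1 effective Bohr radius, E* = -0.5 effective Hartree.
res = minimize_scalar(energy, bounds=(0.1, 10.0), method="bounded")
print(f"optimal orbital radius a = {res.x:.3f}, energy E = {res.fun:.3f}")
```

In the paper's full calculation, the gate potential and Si/SiO2 geometry add terms to the Hamiltonian, deforming the optimal orbital and thereby shifting the donor nuclear resonance frequency with the applied voltage.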

Relevance: 20.00%

Abstract:

This paper reports a study that explored a new construct: climate of fear. We hypothesised that climate of fear would vary across work sites within organisations, but not across organisations. This is in contrast to measures of organisational culture, which were expected to vary both within and across organisations. To test our hypotheses, we developed a new 13-item measure of perceived fear in organisations and tested it in 20 sites across two organisations (N = 209). The culture variables measured were innovative leadership culture and communication culture. As anticipated, climate of fear varied across sites in both organisations, while differences across organisations were not significant. Organisational culture, however, varied between the organisations, and within one of the organisations. The climate of fear scale exhibited acceptable psychometric properties.