996 results for lattice Boltzmann method
Abstract:
This study compared an enzyme-linked immunosorbent assay (ELISA) to a liquid chromatography-tandem mass spectrometry (LC/MS/MS) technique for measurement of tacrolimus concentrations in adult kidney and liver transplant recipients, and investigated how assay choice influenced pharmacokinetic parameter estimates and drug dosage decisions. Tacrolimus concentrations measured by both ELISA and LC/MS/MS from 29 kidney (n = 98 samples) and 27 liver (n = 97 samples) transplant recipients were used to evaluate the performance of these methods in the clinical setting. Tacrolimus concentrations measured by the two techniques were compared via regression analysis. Population pharmacokinetic models were developed independently using ELISA and LC/MS/MS data from 76 kidney recipients. Derived kinetic parameters were used to formulate typical dosing regimens for concentration targeting. Dosage recommendations for the two assays were compared. The relation between LC/MS/MS and ELISA measurements was best described by the regression equation ELISA = 1.02 × (LC/MS/MS) + 0.14 in kidney recipients, and ELISA = 1.12 × (LC/MS/MS) - 0.87 in liver recipients. ELISA was less accurate than LC/MS/MS at lower tacrolimus concentrations. Population pharmacokinetic models based on ELISA and LC/MS/MS data were similar, with residual random errors of 4.1 ng/mL and 3.7 ng/mL, respectively. Assay choice gave rise to dosage prediction differences ranging from 0% to 30%. ELISA measurements of tacrolimus are not automatically interchangeable with LC/MS/MS values. Assay differences were greatest in adult liver recipients, probably reflecting periods of liver dysfunction and impaired biliary secretion of metabolites. While the majority of data collected in this study suggested assay differences in adult kidney recipients were minimal, findings of ELISA dosage underpredictions of up to 25% in the long term must be investigated further.
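The reported regression equations lend themselves to a direct conversion between assay scales. The following is a minimal sketch (the function name and example values are illustrative, not from the study) applying the stated kidney and liver equations:

```python
# Convert an LC/MS/MS tacrolimus concentration (ng/mL) to its
# ELISA-equivalent using the regression equations reported above.
def elisa_equivalent(lcmsms_ng_ml, organ):
    if organ == "kidney":
        return 1.02 * lcmsms_ng_ml + 0.14   # ELISA = 1.02 x (LC/MS/MS) + 0.14
    if organ == "liver":
        return 1.12 * lcmsms_ng_ml - 0.87   # ELISA = 1.12 x (LC/MS/MS) - 0.87
    raise ValueError("organ must be 'kidney' or 'liver'")

print(round(elisa_equivalent(10.0, "kidney"), 2))  # 10.34
print(round(elisa_equivalent(10.0, "liver"), 2))   # 10.33
```

At a nominal 10 ng/mL the two organ groups give almost identical ELISA equivalents, consistent with the study's finding that assay divergence matters most at low concentrations and in liver recipients.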
Abstract:
A soft linguistic evaluation method is proposed for the environmental assessment of physical infrastructure projects based on fuzzy relations. Infrastructure projects are characterized in terms of linguistic expressions of 'performance' with respect to factors or impacts and the 'importance' of those factors/impacts. A simple example is developed to illustrate the method in the context of three road infrastructure projects assessed against five factors/impacts. In addition, a means to include hard or crisp factors is presented and illustrated with respect to a sixth factor.
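As a rough illustration of a fuzzy-relational evaluation of this kind (the membership grades and the max-min composition below are illustrative assumptions, not the paper's exact formulation):

```python
# Score projects by max-min composition of a fuzzy 'performance'
# relation with fuzzy 'importance' weights over the impact factors.
performance = {                # membership grades: project -> factor scores
    "road_A": [0.8, 0.6, 0.9, 0.5, 0.7],
    "road_B": [0.4, 0.9, 0.5, 0.8, 0.6],
}
importance = [0.9, 0.5, 0.7, 0.6, 0.8]   # fuzzy importance of each factor

def fuzzy_score(perf, weights):
    # max over factors of min(performance, importance)
    return max(min(p, w) for p, w in zip(perf, weights))

for name, perf in performance.items():
    print(name, fuzzy_score(perf, importance))
```

A crisp (hard) factor, as mentioned for the sixth factor, could be folded in by mapping its numeric value to a membership grade before composition.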
Abstract:
The aim of this study was to develop and trial a method to monitor the evolution of clinical reasoning in a PBL curriculum that is suitable for use in a large medical school. Termed Clinical Reasoning Problems (CRPs), it is based on the notion that clinical reasoning is dependent on the identification and correct interpretation of certain critical clinical features. Each problem consists of a clinical scenario comprising presentation, history and physical examination. Based on this information, subjects are asked to nominate the two most likely diagnoses and to list the clinical features that they considered in formulating their diagnoses, indicating whether these features supported or opposed the nominated diagnoses. Students at different levels of medical training completed a set of 10 CRPs as well as the Diagnostic Thinking Inventory, a self-reporting questionnaire designed to assess reasoning style. Responses were scored against those of a reference group of general practitioners. Results indicate that the CRPs are an easily administered, reliable and valid assessment of clinical reasoning, able to successfully monitor its development throughout medical training. Consequently, they can be employed to assess clinical reasoning skill in individual students and to evaluate the success of undergraduate medical schools in providing effective tuition in clinical reasoning.
Abstract:
We propose a new method to investigate the thermal properties of QCD with a small quark chemical potential mu. Derivatives of quark and gluonic observables with respect to mu are computed at mu = 0 for two flavors of p4-improved staggered fermions with ma = 0.1, 0.2 on a 16^3 x 4 lattice, and used to calculate the leading-order Taylor expansion in mu of the location of the pseudocritical point about mu = 0. This expansion should be well behaved for the small values of mu_q/T_c ~ 0.1 relevant for BNL RHIC phenomenology, and predicts a critical curve T_c(mu) in reasonable agreement with estimates obtained using exact reweighting. In addition, we contrast the cases of isoscalar and isovector chemical potentials, quantify the effect of mu ≠ 0 on the equation of state, and comment on the complex phase of the fermion determinant in QCD with mu ≠ 0.
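Because the theory at isoscalar chemical potential is symmetric under mu → -mu, odd derivatives vanish at mu = 0, so the leading-order Taylor expansion of the pseudocritical temperature takes the schematic form (no coefficient values implied here; the curvature is what the derivatives computed at mu = 0 determine):

```latex
T_c(\mu_q) \;=\; T_c(0) \;+\; \frac{\mu_q^{2}}{2}\,
\left.\frac{d^{2}T_c}{d\mu_q^{2}}\right|_{\mu_q=0}
\;+\; \mathcal{O}\!\left(\mu_q^{4}\right)
```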
Abstract:
A detailed analysis procedure is described for evaluating rates of volumetric change in brain structures based on structural magnetic resonance (MR) images. In this procedure, a series of image processing tools have been employed to address the problems encountered in measuring rates of change based on structural MR images. These tools include an algorithm for intensity non-uniformity correction, a robust algorithm for three-dimensional image registration with sub-voxel precision and an algorithm for brain tissue segmentation. A distinctive feature of the procedure is the use of a fractional volume model that has been developed to provide a quantitative measure for the partial volume effect. With this model, the fractional constituent tissue volumes are evaluated for voxels at the tissue boundary that manifest partial volume effect, thus allowing tissue boundaries to be defined at a sub-voxel level and in an automated fashion. Validation studies are presented on key algorithms including segmentation and registration. An overall assessment of the method is provided through the evaluation of the rates of brain atrophy in a group of normal elderly subjects for which the rate of brain atrophy due to normal aging is predictably small. An application of the method is given in Part II where the rates of brain atrophy in various brain regions are studied in relation to normal aging and Alzheimer's disease. (C) 2002 Elsevier Science Inc. All rights reserved.
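A generic two-tissue calculation can illustrate the idea behind a fractional volume model (this linear-mixing sketch and its intensity values are assumptions for illustration, not the authors' algorithm):

```python
# Two-tissue partial-volume sketch: a boundary voxel's intensity is
# modelled as a linear mix of two pure-tissue intensities, so the
# fractional volume of tissue A is recovered by inverting the mix.
def tissue_fraction(voxel_intensity, pure_a, pure_b):
    """Fraction of tissue A in a voxel on the A/B boundary."""
    frac = (voxel_intensity - pure_b) / (pure_a - pure_b)
    return min(1.0, max(0.0, frac))   # clamp to the physical range [0, 1]

# A voxel halfway between pure tissue A (intensity 120) and B (intensity 40):
print(tissue_fraction(80.0, pure_a=120.0, pure_b=40.0))  # 0.5
```

Summing such fractions over boundary voxels yields sub-voxel volume estimates, which is what makes small longitudinal atrophy rates measurable.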
Abstract:
The emphasis of this work is on the optimal design of MRI magnets with both superconducting coils and ferromagnetic rings. The work is directed to the automated design of MRI magnet systems containing superconducting wire and both 'cold' and 'warm' iron. Details of the optimization procedure are given and the results show combined superconducting and iron material MRI magnets with excellent field characteristics. Strong, homogeneous central magnetic fields are produced with little stray or external field leakage. The field calculations are performed using a semi-analytical method for both current coil and iron material sources. Design examples for symmetric, open and asymmetric clinical MRI magnets containing both superconducting coils and ferromagnetic material are presented.
Abstract:
Numerical modeling of the eddy currents induced in the human body by the pulsed field gradients in MRI presents a difficult computational problem. It requires an efficient and accurate computational method for high spatial resolution analyses with a relatively low input frequency. In this article, a new technique is described which allows the finite difference time domain (FDTD) method to be efficiently applied over a very large frequency range, including low frequencies. This is not the case in conventional FDTD-based methods. A method of implementing streamline gradients in FDTD is presented, as well as comparative analyses which show that the correct source injection in the FDTD simulation plays a crucial role in obtaining accurate solutions. In particular, making use of the derivative of the input source waveform is shown to provide distinct benefits in accuracy over direct source injection. In the method, no alterations to the properties of either the source or the transmission media are required. The method is essentially frequency independent and the source injection method has been verified against examples with analytical solutions. Results are presented showing the spatial distribution of gradient-induced electric fields and eddy currents in a complete body model.
Abstract:
A thermodynamic approach is developed in this paper to describe the behavior of a subcritical fluid in the neighborhood of the vapor-liquid interface and close to a graphite surface. The fluid is modeled as a system of parallel molecular layers. The Helmholtz free energy of the fluid is expressed as the sum of the intrinsic Helmholtz free energies of separate layers and the potential energy of their mutual interactions calculated by the 10-4 potential. This Helmholtz free energy is described by an equation of state (such as the Bender or Peng-Robinson equation), which provides a convenient means of obtaining the intrinsic Helmholtz free energy of each molecular layer as a function of its two-dimensional density. All molecular layers of the bulk fluid are in mechanical equilibrium corresponding to the minimum of the total potential energy. In the case of adsorption the external potential exerted by the graphite layers is added to the free energy. The state of the interface zone between the liquid and the vapor phases or the state of the adsorbed phase is determined by the minimum of the grand potential. In the case of phase equilibrium the approach leads to the distribution of density and pressure over the transition zone. The interrelation between the collision diameter and the potential well depth was determined by the surface tension. It was shown that the distance between neighboring molecular layers substantially changes in the vapor-liquid transition zone and in the adsorbed phase with loading. The approach is considered in this paper for the case of adsorption of argon and nitrogen on carbon black. In both cases an excellent agreement with the experimental data was achieved without additional assumptions and fitting parameters, except for the fluid-solid potential well depth.
The approach has far-reaching consequences and can be readily extended to the model of adsorption in slit pores of carbonaceous materials and to the analysis of multicomponent adsorption systems. (C) 2002 Elsevier Science (USA).
Abstract:
In this paper the diffusion and flow of carbon tetrachloride, benzene and n-hexane through a commercial activated carbon is studied by a differential permeation method. The range of pressure is covered from very low pressure to a pressure range where significant capillary condensation occurs. Helium as a non-adsorbing gas is used to determine the characteristics of the porous medium. For adsorbing gases and vapors, the motion of adsorbed molecules in small pores gives rise to a sharp increase in permeability at very low pressures. The interplay between a decreasing behavior in permeability due to the saturation of small pores with adsorbed molecules and an increasing behavior due to viscous flow in larger pores with pressure could lead to a minimum in the plot of total permeability versus pressure. This phenomenon is observed for n-hexane at 30°C. At relative pressures of 0.1-0.8, where the gaseous viscous flow dominates, the permeability is a linear function of pressure. Since activated carbon has a wide pore size distribution, the mobility mechanism of these adsorbed molecules differs from pore to pore. In very small pores where adsorbate molecules fill the pore the permeability decreases with an increase in pressure, while in intermediate pores the permeability of such transport increases with pressure due to the increasing build-up of layers of adsorbed molecules. For even larger pores, the transport is mostly due to diffusion and flow of free molecules, which gives rise to linear permeability with respect to pressure. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
A new thermodynamic approach has been developed in this paper to analyze adsorption in slitlike pores. The equilibrium is described by two thermodynamic conditions: the Helmholtz free energy must be minimal, and the grand potential functional at that minimum must be negative. This approach has led to local isotherms that describe adsorption in the form of a single layer or two layers near the pore walls. In narrow pores local isotherms have one step that could be either very sharp but continuous, or discontinuous and bench-like over a definite range of pore width. The latter reflects a so-called 0 → 1 monolayer transition. In relatively wide pores, local isotherms have two steps, of which the first step corresponds to the appearance of two layers near the pore walls, while the second step corresponds to the filling of the space between these layers. All features of local isotherms are in agreement with the results obtained from the density functional theory and Monte Carlo simulations. The approach is used for determining pore size distributions of carbon materials. We illustrate this with the benzene adsorption data on activated carbon at 20, 50, and 80°C, argon adsorption on activated carbon Norit ROX at 87.3 K, and nitrogen adsorption on activated carbon Norit R1 at 77.3 K.
Abstract:
In this paper we analyzed the adsorption of gases and vapors on graphitised thermal carbon black by using a modified DFT-lattice theory, in which we assume that the behavior of the first layer in the adsorption film is different from that of the second and higher layers. The effects of various parameters on the topology of the adsorption isotherm were first investigated, and the model was then applied in the analysis of adsorption data of numerous substances on carbon black. We have found that the first layer in the adsorption film behaves differently from the second and higher layers in such a way that the adsorbate-adsorbate interaction energy in the first layer is less than that of the second and higher layers, and the same is observed for the partition function. Furthermore, the adsorbate-adsorbate and adsorbate-adsorbent interaction energies obtained from the fitting are consistently lower than the corresponding values obtained from the viscosity data and calculated from the Lorentz-Berthelot rule, respectively.
Abstract:
It has been argued that power-law time-to-failure fits for cumulative Benioff strain and an evolution in size-frequency statistics in the lead-up to large earthquakes are evidence that the crust behaves as a Critical Point (CP) system. If so, intermediate-term earthquake prediction is possible. However, this hypothesis has not been proven. If the crust does behave as a CP system, stress correlation lengths should grow in the lead-up to large events through the action of small to moderate ruptures and drop sharply once a large event occurs. However this evolution in stress correlation lengths cannot be observed directly. Here we show, using the lattice solid model to describe discontinuous elasto-dynamic systems subjected to shear and compression, that it is possible for correlation lengths to exhibit CP-type evolution. In the case of a granular system subjected to shear, this evolution occurs in the lead-up to the largest event and is accompanied by an increasing rate of moderate-sized events and power-law acceleration of Benioff strain release. In the case of an intact sample system subjected to compression, the evolution occurs only after a mature fracture system has developed. The results support the existence of a physical mechanism for intermediate-term earthquake forecasting and suggest this mechanism is fault-system dependent. This offers an explanation of why accelerating Benioff strain release is not observed prior to all large earthquakes. The results prove the existence of an underlying evolution in discontinuous elasto-dynamic systems which is capable of providing a basis for forecasting catastrophic failure and earthquakes.
Abstract:
In order to understand the earthquake nucleation process, we need to understand the effective frictional behavior of faults with complex geometry and fault gouge zones. One important aspect of this is the interaction between the friction law governing the behavior of the fault on the microscopic level and the resulting macroscopic behavior of the fault zone. Numerical simulations offer a possibility to investigate the behavior of faults on many different scales and thus provide a means to gain insight into fault zone dynamics on scales which are not accessible to laboratory experiments. Numerical experiments have been performed to investigate the influence of the geometric configuration of faults with a rate- and state-dependent friction at the particle contacts on the effective frictional behavior of these faults. The numerical experiments are designed to be similar to laboratory experiments by DIETERICH and KILGORE (1994) in which a slide-hold-slide cycle was performed between two blocks of material and the resulting peak friction was plotted vs. holding time. Simulations with a flat fault without a fault gouge have been performed to verify the implementation. These have shown close agreement with comparable laboratory experiments. The simulations performed with a fault containing fault gouge have demonstrated a strong dependence of the critical slip distance D-c on the roughness of the fault surfaces and are in qualitative agreement with laboratory experiments.
Abstract:
Many large-scale stochastic systems, such as telecommunications networks, can be modelled using a continuous-time Markov chain. However, it is frequently the case that a satisfactory analysis of their time-dependent, or even equilibrium, behaviour is impossible. In this paper, we propose a new method of analysing Markovian models, whereby the existing transition structure is replaced by a more amenable one. Using rates of transition given by the equilibrium expected rates of the corresponding transitions of the original chain, we are able to approximate its behaviour. We present two formulations of the idea of expected rates. The first provides a method for analysing time-dependent behaviour, while the second provides a highly accurate means of analysing equilibrium behaviour. We shall illustrate our approach with reference to a variety of models, giving particular attention to queueing and loss networks. (C) 2003 Elsevier Ltd. All rights reserved.
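The idea of equilibrium expected rates can be illustrated on the smallest possible example (a generic two-state chain with arbitrary rates; this is a sketch of the underlying quantities, not the paper's formulation):

```python
# Two-state continuous-time Markov chain with rate a for 0 -> 1 and
# rate b for 1 -> 0. The equilibrium distribution pi solves pi . Q = 0,
# which for this chain gives pi = (b, a) / (a + b).
def two_state_equilibrium(a, b):
    total = a + b
    return (b / total, a / total)

pi = two_state_equilibrium(a=2.0, b=3.0)
print(pi)                       # (0.6, 0.4)

# The equilibrium expected rate of 0 -> 1 transitions: the jump rate
# weighted by the equilibrium probability of being in state 0.
expected_rate_01 = pi[0] * 2.0
print(expected_rate_01)         # 1.2
```

In a larger model, such expected rates would be computed (or estimated) for each transition of the original chain and used as the constant rates of the simpler replacement chain.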
Abstract:
A supersweet sweet corn hybrid, Pacific H5, was planted at weekly intervals (P-1 to P-5) in spring in South-Eastern Queensland. All plantings were harvested at the same time resulting in immature seed for the last planting (P-5). The seed was handled by three methods: manual harvest and processing (M-1), manual harvest and mechanical processing (M-2) and mechanical harvest and processing (M-3), and later graded into three sizes (small, medium and large). After eight months storage at 12-14°C, seed was maintained at 30°C with bimonthly monitoring of germination for fourteen months and seed damage at the end of this period. Seed quality was greatest for M-1 and was reduced by mechanical processing but not by mechanical harvesting. Large and medium seed had higher germination due to greater storage reserves but also more seed damage during mechanical processing. Immature seed from premature harvest (P-5) had poor quality especially when processed mechanically and reinforced the need for harvested seed to be physiologically mature.