Abstract:
Inelastic neutron scattering (INS) spectroscopy has been used to observe and characterise hydrogen on the carbon component of a Pt/C catalyst. INS provides the complete vibrational spectrum of coronene, regarded as a molecular model of a graphite layer. The vibrational modes are assigned with the aid of ab initio density functional theory calculations, and the INS spectra are modelled with the a-CLIMAX program. A spectrum for which the H modes of coronene have been computationally suppressed, a carbon-only coronene spectrum, is a better representation of the spectrum of a graphite layer than is coronene itself. Dihydrogen dosing of a Pt/C catalyst caused amplification of the surface modes of carbon, an effect described as H riding on carbon. From the enhancement of the low-energy carbon modes (100–600 cm⁻¹) it is concluded that spillover hydrogen becomes attached to dangling bonds at the edges of graphitic regions of the carbon support.
Abstract:
Commonly used repair rate models for repairable systems in the reliability literature are renewal processes, generalised renewal processes and non-homogeneous Poisson processes. In addition to these models, geometric processes (GPs) are studied occasionally. A GP, however, can only model systems with monotonically changing (increasing, decreasing or constant) failure intensity. This paper deals with the reliability modelling of failure processes for repairable systems whose failure intensity shows a bathtub-type non-monotonic behaviour. A new stochastic process, an extended Poisson process, is introduced. Reliability indices are presented, and the parameters of the new process are estimated. Experimental results on a data set demonstrate the validity of the new process.
Abstract:
The basic repair rate models for repairable systems are homogeneous Poisson processes, renewal processes and non-homogeneous Poisson processes. In addition to these models, geometric processes are studied occasionally. Geometric processes, however, can only model systems with monotonically changing (increasing, decreasing or constant) failure intensity. This paper deals with the reliability modelling of the failure process of repairable systems whose failure intensity shows a bathtub-type non-monotonic behaviour. A new stochastic process, an extended Poisson process, is introduced. Reliability indices and parameter estimation are presented, and the model is compared with other repair models on a data set.
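The geometric process named above has a compact definition: the inter-failure times X_k are such that {a^(k-1) X_k} forms a renewal process, so their means change monotonically with ratio a. A minimal simulation sketch (the exponential underlying renewal process and all parameter values are illustrative assumptions, not taken from the paper):

```python
import random

def geometric_process_times(mean_first, a, n, rng):
    """Simulate n inter-failure times X_1..X_n of a geometric process:
    {a**(k-1) * X_k} is a renewal process, here taken as exponential,
    so E[X_k] = mean_first / a**(k-1).  a > 1 models a deteriorating
    system, a < 1 an improving one, and a = 1 a plain renewal process."""
    return [rng.expovariate(1.0 / mean_first) / a ** k for k in range(n)]

times = geometric_process_times(100.0, 1.1, 5, random.Random(42))
```

With a > 1 the simulated inter-failure times shrink on average; this strictly monotone behaviour is exactly the limitation the extended Poisson process is introduced to overcome.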
Abstract:
Individuals with elevated levels of plasma low density lipoprotein (LDL) cholesterol (LDL-C) are considered to be at risk of developing coronary heart disease. LDL particles are removed from the blood by a process known as receptor-mediated endocytosis, which occurs mainly in the liver. A series of classical experiments delineated the major steps in the endocytotic process: apolipoprotein B-100 present on LDL particles binds to a specific receptor (LDL receptor, LDL-R) in specialized areas of the cell surface called clathrin-coated pits. The pit comprising the LDL-LDL-R complex is internalized, forming a cytoplasmic endosome. Fusion of the endosome with a lysosome leads to degradation of the LDL into its constituent parts (that is, cholesterol, fatty acids, and amino acids), which are released for reuse by the cell, or are excreted. In this paper, we formulate a mathematical model of LDL endocytosis, consisting of a system of ordinary differential equations. We validate our model against existing in vitro experimental data, and we use it to explore differences in system behavior when a single bolus of extracellular LDL is supplied to cells, compared to when a continuous supply of LDL particles is available. Whereas the former situation is common to in vitro experimental systems, the latter better reflects the in vivo situation. We use asymptotic analysis and numerical simulations to study the long-time behavior of model solutions. The implications of model-derived insights for experimental design are discussed.
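The endocytosis steps enumerated above map naturally onto a small ODE system. The sketch below is a deliberately minimal two-variable caricature with hypothetical rate constants; it is not the paper's model or its fitted parameters:

```python
def ldl_endocytosis(l0, supply, k_bind, k_int, r_tot, dt, steps):
    """Forward-Euler sketch of a minimal LDL endocytosis model.
    l: extracellular LDL; c: LDL bound to receptors in coated pits.
    Bound complexes are internalised (endosome formation followed by
    lysosomal degradation, lumped into one rate k_int).
    All rate constants are illustrative assumptions."""
    l, c = l0, 0.0
    for _ in range(steps):
        bind = k_bind * l * (r_tot - c)   # binding to free receptors
        internalise = k_int * c           # internalisation + degradation
        l += dt * (supply - bind)
        c += dt * (bind - internalise)
    return l, c
```

Setting supply = 0 mimics the single-bolus in vitro protocol, while supply > 0 mimics the continuous in vivo delivery the authors contrast it with.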
Abstract:
Our objective in this study was to develop and implement an effective intervention strategy to manipulate the amount and composition of dietary fat and carbohydrate (CHO) in free-living individuals in the RISCK study. The study was a randomized, controlled dietary intervention study conducted in 720 participants identified as being at higher risk for, or having, metabolic syndrome. All followed a 4-wk run-in reference diet [high saturated fatty acids (SF)/high glycemic index (GI)]. Volunteers were randomized to continue this diet for a further 24 wk or to 1 of 4 isoenergetic prescriptions [high monounsaturated fatty acids (MUFA) (HM)/high GI; HM/low GI; low fat (LF)/high GI; and LF/low GI]. We developed a food exchange model to implement each diet. Dietary records and plasma phospholipid fatty acids were used to assess the effectiveness of the intervention strategy. Reported fat intake from the LF diets was significantly reduced to 28% of energy (%E), compared with 38%E in the HM and reference diets. SF intake in the HM and LF diets was successfully decreased to ~10%E, compared with 17%E in the reference diet (P = 0.001). Dietary MUFA in the HM diets was ~17%E, significantly higher than in the reference (12%E) and LF (10%E) diets (P = 0.001). Changes in plasma phospholipid fatty acids provided further evidence of the successful manipulation of fat intake. The GI of the high-GI and low-GI arms differed by ~9 points (P = 0.001). The food exchange model provided an effective dietary strategy for the design and implementation, across multiple sites, of 5 experimental diets with specific targets for the proportion of fat and CHO. J. Nutr. 139: 1534-1540, 2009.
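The GI targets above rest on the standard definition of a diet's overall GI as the carbohydrate-weighted mean of the component food GIs; this is the quantity a food exchange model manipulates when it swaps high- for low-GI exchanges. A sketch (the food GI values and CHO amounts are hypothetical):

```python
def diet_glycemic_index(items):
    """Overall dietary GI: sum(GI_i * cho_i) / sum(cho_i), where cho_i is
    the grams of available carbohydrate contributed by food i."""
    total_cho = sum(cho for _, cho in items)
    return sum(gi * cho for gi, cho in items) / total_cho

# Swapping half the CHO from a GI-70 exchange to a GI-40 exchange:
gi_high = diet_glycemic_index([(70.0, 100.0)])
gi_mixed = diet_glycemic_index([(70.0, 50.0), (40.0, 50.0)])
```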
Abstract:
A new primary model based on a thermodynamically consistent first-order kinetic approach was constructed to describe non-log-linear inactivation kinetics of pressure-treated bacteria. The model assumes a first-order process in which the specific inactivation rate changes inversely with the square root of time. The model gave reasonable fits to experimental data over six to seven orders of magnitude. It was also tested on 138 published data sets and provided good fits in about 70% of cases in which the shape of the curve followed the typical convex-upward form. In the remainder of the published examples, curves contained additional shoulder regions or extended tail regions. Curves with shoulders could be accommodated by including an additional time-delay parameter, and curves with tails could be accommodated by omitting points in the tail beyond the point at which survival levels remained more or less constant. The model parameters varied regularly with pressure, which may reflect a genuine mechanistic basis for the model. This property also allowed the calculation of (a) parameters analogous to the decimal reduction time D and to z (the temperature increase needed to change the D value by a factor of 10) in thermal processing, and hence the processing conditions needed to attain a desired level of inactivation; and (b) the apparent thermodynamic volumes of activation associated with the lethal events. The hypothesis that inactivation rates change as a function of the square root of time would be consistent with a diffusion-limited process.
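The primary model just described has a closed form: with dN/dt = -(k/√t)N, integrating k/√s from 0 to t gives 2k√t, so ln(N/N₀) = -2k√t and the survival curve is convex upward on a log plot. A sketch with a hypothetical rate constant, including the optional time-delay (shoulder) parameter mentioned above:

```python
import math

def log10_survivors(t, k, shoulder=0.0):
    """log10(N/N0) for first-order inactivation whose specific rate falls
    as k/sqrt(t): ln(N/N0) = -2*k*sqrt(t).  The optional time delay
    reproduces a shoulder region; k here is a hypothetical value."""
    t_eff = max(t - shoulder, 0.0)
    return -2.0 * k * math.sqrt(t_eff) / math.log(10)
```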
Abstract:
OBJECTIVE: To compare insulin sensitivity (Si) from a frequently sampled intravenous glucose tolerance test (FSIGT) and subsequent minimal model analyses with surrogate measures of insulin sensitivity and resistance, and to compare features of the metabolic syndrome between Caucasians and Indian Asians living in the UK. SUBJECTS: In all, 27 healthy male volunteers (14 UK Caucasians and 13 UK Indian Asians), with a mean age of 51.2 +/- 1.5 y, BMI of 25.8 +/- 0.6 kg/m² and Si of 2.85 +/- 0.37. MEASUREMENTS: Si was determined from an FSIGT with subsequent minimal model analysis. The concentrations of insulin, glucose and nonesterified fatty acids (NEFA) were analysed in fasting plasma and used to calculate surrogate measures of insulin sensitivity (quantitative insulin sensitivity check index (QUICKI), revised QUICKI) and resistance (homeostasis model assessment of insulin resistance (HOMA-IR), fasting insulin resistance index (FIRI), Bennett's index, fasting insulin, insulin-to-glucose ratio). Plasma concentrations of triacylglycerol (TAG), total cholesterol, high-density lipoprotein cholesterol (HDL-C) and low-density lipoprotein cholesterol (LDL-C) were also measured in the fasted state. Anthropometric measurements were conducted to determine body-fat distribution. RESULTS: Correlation analysis identified the strongest relationship between Si and the revised QUICKI (r = 0.67; P < 0.001). Significant associations were also observed between Si and QUICKI (r = 0.51; P = 0.007), HOMA-IR (r = -0.50; P = 0.009), FIRI and fasting insulin. The Indian Asian group had lower HDL-C (P = 0.001), a higher waist-to-hip ratio (P = 0.01) and was significantly less insulin sensitive (Si) than the Caucasian group (P = 0.02). CONCLUSION: The revised QUICKI demonstrated a statistically strong relationship with the minimal model. However, it was unable to differentiate between insulin-sensitive and -resistant groups in this study.
Future larger studies in population groups with varying degrees of insulin sensitivity are recommended to investigate the general applicability of the revised QUICKI surrogate technique.
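For reference, the surrogate indices compared above have simple closed forms (conventional units: fasting insulin in µU/mL; glucose in mmol/L for HOMA-IR and mg/dL for QUICKI; NEFA in mmol/L). A sketch:

```python
import math

def homa_ir(insulin_uU_ml, glucose_mmol_l):
    """Homeostasis model assessment of insulin resistance."""
    return insulin_uU_ml * glucose_mmol_l / 22.5

def quicki(insulin_uU_ml, glucose_mg_dl):
    """Quantitative insulin sensitivity check index."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

def revised_quicki(insulin_uU_ml, glucose_mg_dl, nefa_mmol_l):
    """Revised QUICKI adds log10 of fasting NEFA to the denominator."""
    return 1.0 / (math.log10(insulin_uU_ml)
                  + math.log10(glucose_mg_dl)
                  + math.log10(nefa_mmol_l))
```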
Abstract:
Sunflower oil-in-water emulsions containing TBHQ, caffeic acid, epigallocatechin gallate (EGCG), or 6-hydroxy-2,5,7,8-tetramethylchroman-2-carboxylic acid (Trolox), both with and without BSA, were stored at 50 and 30 °C. Oxidation of the oil was monitored by determination of the peroxide value (PV), conjugated diene content, and hexanal formation. Emulsions containing EGCG, caffeic acid, and, to a lesser extent, Trolox were much more stable during storage in the presence of BSA than in its absence, even though BSA itself did not provide an antioxidant effect. BSA did not have a synergistic effect on the antioxidant activity of TBHQ. The BSA structure changed, with a considerable loss of fluorescent tryptophan groups during storage of solutions containing BSA and antioxidants, and a BSA-antioxidant adduct with radical-scavenging activity was formed. The highest radical-scavenging activity observed was for the protein isolated from a sample containing EGCG and BSA incubated at 30 °C for 10 d. This fraction contained unchanged BSA as well as the BSA-antioxidant adduct, but 95.7% of the initial fluorescence had been lost, showing that most of the BSA had been altered. It can be concluded that BSA exerts its synergistic effect with antioxidants through formation of a protein-antioxidant adduct during storage, which concentrates at the oil-water interface owing to the surface-active nature of the protein.
Abstract:
In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated. These potential sources of error include model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer which are subsequently set to zero, and hence results in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux-corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low-resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2; this led to an error in the transport direction and hence an error in tracer distribution. High-resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer. Turning off the convective mixing parameterisation in the UM significantly reduced the vertical transport of tracer. Finally, air quality forecasts were found to be sensitive to the timing of synoptic-scale features.
Errors in the position of the cold front relative to the tracer release location of only 1 h resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.
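The overprediction mechanism described above is easy to see in isolation: clipping spurious negative tracer values to zero can only add mass, which is why a flux-corrected transport step was needed for conservation. The toy functions below illustrate the problem and a crude global-rescaling repair; they are an illustration only, not the UM's actual scheme:

```python
def clip_negatives(field):
    """Zero out spurious negative tracer values; total mass can only grow."""
    return [max(v, 0.0) for v in field]

def clip_and_conserve(field):
    """Clip negatives, then rescale so total mass matches the original
    (a crude global mass fixer, assuming the true total is positive)."""
    total = sum(field)
    clipped = clip_negatives(field)
    s = sum(clipped)
    return [v * total / s for v in clipped] if s > 0 else clipped
```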
Abstract:
The Cambridge Tropospheric Trajectory model of Chemistry and Transport (CiTTyCAT), a Lagrangian chemistry model, has been evaluated using atmospheric chemical measurements collected during the East Atlantic Summer Experiment 1996 (EASE '96). This field campaign was part of the UK Natural Environment Research Council's (NERC) Atmospheric Chemistry Studies in the Oceanic Environment (ACSOE) programme, conducted at Mace Head, Republic of Ireland, during July and August 1996. The model includes a description of gas-phase tropospheric chemistry, and simple parameterisations for surface deposition, mixing from the free troposphere and emissions. The model generally compares well with the measurements and is used to study the production and loss of O3 under a variety of conditions. The mean difference between the hourly O3 concentrations calculated by the model and those measured is 0.6 ppbv with a standard deviation of 8.7 ppbv. Three specific air-flow regimes were identified during the campaign – westerly, anticyclonic (easterly) and south westerly. The westerly flow is typical of background conditions for Mace Head. However, on some occasions there was evidence of long-range transport of pollutants from North America. In periods of anticyclonic flow, air parcels had collected emissions of NOx and VOCs immediately before arriving at Mace Head, leading to O3 production. The level of calculated O3 depends critically on the precise details of the trajectory, and hence on the emissions into the air parcel. In several periods of south westerly flow, low concentrations of O3 were measured which were consistent with deposition and photochemical destruction inside the tropical marine boundary layer.
Abstract:
We present a novel kinetic multi-layer model that explicitly resolves mass transport and chemical reaction at the surface and in the bulk of aerosol particles (KM-SUB). The model is based on the PRA framework of gas-particle interactions (Pöschl-Rudich-Ammann, 2007), and it includes reversible adsorption, surface reactions and surface-bulk exchange as well as bulk diffusion and reaction. Unlike earlier models, KM-SUB does not require simplifying assumptions about steady-state conditions and radial mixing. The temporal evolution and concentration profiles of volatile and non-volatile species at the gas-particle interface and in the particle bulk can be modeled along with surface concentrations and gas uptake coefficients. In this study we explore and exemplify the effects of bulk diffusion on the rate of reactive gas uptake for a simple reference system, the ozonolysis of oleic acid particles, in comparison to experimental data and earlier model studies. We demonstrate how KM-SUB can be used to interpret and analyze experimental data from laboratory studies, and how the results can be extrapolated to atmospheric conditions. In particular, we show how interfacial and bulk transport, i.e., surface accommodation, bulk accommodation and bulk diffusion, influence the kinetics of the chemical reaction. Sensitivity studies suggest that in fine air particulate matter oleic acid and compounds with similar reactivity against ozone (carbon-carbon double bonds) can reach chemical lifetimes of many hours only if they are embedded in a (semi-)solid matrix with very low diffusion coefficients (<10⁻¹⁰ cm² s⁻¹). Depending on the complexity of the investigated system, unlimited numbers of volatile and non-volatile species and chemical reactions can be flexibly added and treated with KM-SUB.
We propose and intend to pursue the application of KM-SUB as a basis for the development of a detailed master mechanism of aerosol chemistry as well as for the derivation of simplified but realistic parameterizations for large-scale atmospheric and climate models.
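A quick way to see why the diffusion-coefficient threshold above matters: the characteristic time for diffusive mixing through a spherical particle is commonly estimated as τ ≈ r²/(π²D). A sketch (the particle radius and diffusion coefficients are chosen for illustration, not taken from the paper):

```python
import math

def bulk_mixing_time(radius_cm, diff_cm2_s):
    """Characteristic e-folding time for diffusive mixing through a
    sphere of radius r: tau = r**2 / (pi**2 * D)."""
    return radius_cm ** 2 / (math.pi ** 2 * diff_cm2_s)

# For a 100 nm radius (1e-5 cm) particle:
tau_liquid = bulk_mixing_time(1e-5, 1e-6)      # liquid-like D
tau_semisolid = bulk_mixing_time(1e-5, 1e-15)  # semi-solid D
```

At a liquid-like D of 10⁻⁶ cm² s⁻¹ mixing takes microseconds, whereas at 10⁻¹⁵ cm² s⁻¹ it takes hours, consistent with the long chemical lifetimes predicted for (semi-)solid matrices.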
Abstract:
The first measurement of the relative permittivity (εr) and loss tangent (tan δ) of EPON™ SU-8 advanced thick-film ultraviolet photoresist is reported at frequencies between 75 and 110 GHz (W-band). The problems associated with such a measurement are discussed, an error analysis is given, and values of εr = 1.725 ± 0.08 and tan δ = 0.02 ± 0.001 are determined.
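The two reported quantities combine into the complex relative permittivity conventionally used in microwave design, ε = ε′(1 − j tan δ); a one-line sketch:

```python
def complex_permittivity(eps_r, tan_delta):
    """Complex relative permittivity eps' - j*eps'' from the measured
    real part eps' and loss tangent tan(delta) = eps''/eps'."""
    return complex(eps_r, -eps_r * tan_delta)

eps_su8 = complex_permittivity(1.725, 0.02)  # the reported W-band values
```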
Abstract:
In this chapter we described how the inclusion of a model of a human arm, combined with the measurement of its neural input and a predictor, can provide a previously proposed teleoperator design with robustness under time delay. Our trials gave clear indications of the superiority of the NPT scheme over traditional as well as the modified Yokokohji and Yoshikawa architectures. Its fundamental advantages are the time-lead of the slave, more efficient and more natural-feeling manipulation, and the fact that incorporating an operator arm model leads to more credible stability results. Finally, its simplicity allows local control techniques that are less likely to fail to be employed. However, a significant advantage of the enhanced Yokokohji and Yoshikawa architecture stems from the very fact that it is a conservative modification of current designs. Under large prediction errors, it can provide robustness by directing the master and slave states to their means and, since it relies on the passivity of the mechanical part of the system, it would not confuse the operator. An experimental implementation of the techniques will provide further evidence of the performance of the proposed architectures. The employment of neural networks and fuzzy logic, which will provide an adaptive model of the human arm and robustifying control terms, is scheduled for the near future.
Abstract:
DISOPE is a technique for solving optimal control problems where there are differences in structure and parameter values between reality and the model employed in the computations. The model-reality differences can also allow for deliberate simplification of model characteristics and performance indices in order to facilitate the solution of the optimal control problem. The technique was developed originally in continuous time and later extended to discrete time. The main property of the procedure is that, by iterating on appropriately modified model-based problems, the correct optimal solution is achieved in spite of the model-reality differences. Algorithms have been developed in both continuous and discrete time for a general nonlinear optimal control problem with terminal weighting, bounded controls and terminal constraints. The aim of this paper is to show how the DISOPE technique can aid receding-horizon optimal control computation in nonlinear model predictive control.