961 results for Out-Steady-State Analysis


Relevance: 100.00%

Abstract:

It has been shown that beta auxiliary subunits increase current amplitude in voltage-dependent calcium channels. In this study, however, we found a novel inhibitory effect of the beta3 subunit on macroscopic Ba2+ currents through recombinant N- and R-type calcium channels expressed in Xenopus oocytes. Overexpressed beta3 (12.5 ng/cell cRNA) significantly suppressed N- and R-type, but not L-type, calcium channel currents at physiological holding potentials (HPs) of -60 and -80 mV. At a HP of -80 mV, coinjection of various concentrations (0-12.5 ng) of beta3 with Cav2.2alpha(1) and alpha(2)delta enhanced the maximum conductance of expressed channels at lower beta3 concentrations, but at higher concentrations (>2.5 ng/cell) caused a marked inhibition. The beta3-induced current suppression was reversed at a HP of -120 mV, suggesting that the inhibition was voltage dependent. A high concentration of Ba2+ (40 mM) as a charge carrier also largely diminished the effect of beta3 at -80 mV. Therefore, experimental conditions (HP, divalent cation concentration, and beta3 subunit concentration) approaching normal physiological conditions were critical to elucidate the full extent of this novel beta3 effect. Steady-state inactivation curves revealed that N-type channels exhibited closed-state inactivation without beta3, and that beta3 caused an approximately 40 mV negative shift of the inactivation, producing a second component with an inactivation midpoint of approximately -85 mV. The inactivation of N-type channels in the presence of a high concentration (12.5 ng/cell) of beta3 developed slowly, and the time-dependent inactivation curve was best fit by the sum of two exponential functions with time constants of 14 s and 8.8 min at -80 mV. Similar ultra-slow inactivation was observed for N-type channels without beta3. Thus, beta3 can have a profound negative regulatory effect on N-type (and also R-type) calcium channels by causing a hyperpolarizing shift of the inactivation without affecting ultra-slow and closed-state inactivation properties.
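The inactivation behaviour described above lends itself to a small numerical sketch. Only the time constants (14 s and 8.8 min), the approximately -85 mV second midpoint and the ~40 mV shift come from the abstract; the component amplitudes, the first midpoint (-45 mV) and the slope factor (6 mV) below are illustrative assumptions.

```python
import math

def double_exp_inactivation(t, a1=0.6, tau1=14.0, a2=0.4, tau2=8.8 * 60.0):
    """Fraction of current remaining after t seconds at -80 mV: a sum of
    two exponential decays with the reported 14 s and 8.8 min time
    constants. Amplitudes a1 and a2 are illustrative assumptions."""
    return a1 * math.exp(-t / tau1) + a2 * math.exp(-t / tau2)

def steady_state_availability(v, v_half1=-45.0, v_half2=-85.0, k=6.0,
                              frac2=0.5):
    """Two-component Boltzmann steady-state inactivation curve; the
    second component (midpoint ~ -85 mV) is the beta3-induced shift."""
    boltz = lambda vh: 1.0 / (1.0 + math.exp((v - vh) / k))
    return (1.0 - frac2) * boltz(v_half1) + frac2 * boltz(v_half2)

# Availability falls as the holding potential depolarizes, which is why
# the suppression appears at -60/-80 mV but reverses at -120 mV.
print(round(steady_state_availability(-120), 3))
print(round(steady_state_availability(-60), 3))
```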

Relevance: 100.00%

Abstract:

Aims To investigate the concentration-effect relationship and pharmacokinetics of leflunomide in patients with rheumatoid arthritis (RA). Methods Data were collected from 23 RA patients on leflunomide therapy (as the sole disease-modifying antirheumatic drug, DMARD) for at least 3 months. Main measures were A77 1726 (the active metabolite of leflunomide) plasma concentrations and disease activity measures including pain, duration/intensity of morning stiffness, and the SF-36 survey. A population estimate was sought for apparent clearance (CL/F) and volume of distribution was fixed (0.155 l kg(-1)). Factors screened for influence on CL/F were weight, age, gender and estimated creatinine clearance. Results Significantly higher A77 1726 concentrations were seen in patients with fewer swollen joints and with higher SF-36 mental summary scores than in those with measures indicating more active disease (P < 0.05); concentration-effect trends were seen with five other disease activity measures. Statistical analysis of all disease activity measures showed that mean A77 1726 concentrations in groups with greater control of disease activity were significantly higher than those in whom the measures indicated less desirable control (P < 0.05). There was large between-subject variability in the dose-concentration relationship. A steady-state infusion model best described the pharmacokinetic data. Inclusion of age as a covariate decreased interindividual variability (P < 0.01), but this would not be clinically important in terms of dosage changes. The final parameter estimate (% CV interindividual variability) for CL/F was 0.0184 l h(-1) (50%) (95% CI 0.0146, 0.0222). Residual (unexplained) variability (% CV) was 8.5%. Conclusions This study of leflunomide in patients using the drug clinically indicated a concentration-effect relationship.
From our data, a plasma A77 1726 concentration of 50 mg l(-1) is more likely to indicate someone with less active disease than is a concentration around 30 mg l(-1). The marked variability in pharmacokinetics suggests a place for individualized dosing of leflunomide in RA therapy.
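Under the steady-state infusion model chosen for these data, the average concentration is simply the dose rate divided by CL/F. A quick sketch with the reported CL/F estimate; the 20 mg/day maintenance dose is an assumption for illustration, not a dose taken from the study.

```python
def css_mg_per_l(daily_dose_mg, cl_over_f_l_per_h=0.0184):
    """Average steady-state A77 1726 concentration under a
    steady-state infusion model: Css = dose rate / (CL/F).
    CL/F = 0.0184 L/h is the population estimate reported above."""
    dose_rate_mg_per_h = daily_dose_mg / 24.0
    return dose_rate_mg_per_h / cl_over_f_l_per_h

# A 20 mg/day maintenance dose (an assumption, not from the study)
# lands between the 30 and 50 mg/L concentrations discussed above.
print(round(css_mg_per_l(20), 1))
```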

Relevance: 100.00%

Abstract:

Defining the pharmacokinetics of drugs in overdose is complicated. Deliberate self-poisoning is generally impulsive and associated with poor accuracy in dose history. In addition, early blood samples are rarely collected to characterize the whole plasma concentration-time profile, and the effect of decontamination on the pharmacokinetics is uncertain. The aim of this study was to explore a fully Bayesian methodology for population pharmacokinetic analysis of data that arose from deliberate self-poisoning with citalopram. Prior information on the pharmacokinetic parameters was elicited from 14 published studies on citalopram taken in therapeutic doses. The data set included concentration-time data from 53 patients studied after 63 citalopram overdose events (dose range: 20-1700 mg). Activated charcoal was administered between 0.5 and 4 h after 17 overdose events. The clinical investigator graded the veracity of the patients' dosing history on a 5-point ordinal scale. Inclusion of informative priors stabilised the pharmacokinetic model, and the population mean values could be estimated well. There were no indications of non-linear clearance after excessive doses. The final model included an estimated uncertainty of the dose amount, which a simulation study showed did not affect the model's ability to characterise the effects of activated charcoal. The effects of activated charcoal on clearance and bioavailability were pronounced: a 72% increase and a 22% decrease, respectively. These findings suggest charcoal administration is potentially beneficial after citalopram overdose. The methodology explored here seems promising for characterising the dose-exposure relationship in toxicological settings.
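Since exposure scales as AUC = F x Dose / CL, the two reported charcoal effects combine multiplicatively; this sketch just propagates the two point estimates from the abstract.

```python
def auc_ratio_with_charcoal(f_change=-0.22, cl_change=0.72):
    """Relative exposure (AUC = F*Dose/CL) with activated charcoal,
    using the reported 22% fall in bioavailability and 72% rise in
    clearance."""
    return (1.0 + f_change) / (1.0 + cl_change)

# Exposure is roughly halved, consistent with charcoal being beneficial.
print(round(auc_ratio_with_charcoal(), 2))
```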

Relevance: 100.00%

Abstract:

A system of cascaded qubits interacting via the one-way exchange of photons is studied. While for general operating conditions the system evolves to a superposition of Bell states (a dark state) in the long-time limit, under a particular resonance condition no steady state is reached within a finite time. We analyze the conditional quantum evolution (quantum trajectories) to characterize the asymptotic behavior under this resonance condition. A distinct bimodality is observed: for perfect qubit coupling, the system either evolves to a maximally entangled Bell state without emitting photons (the dark state) or executes a sustained entangled-state cycle, switching randomly between a pair of Bell states while emitting a continuous photon stream; for imperfect coupling, two entangled-state cycles coexist, between which a random selection is made from one quantum trajectory to another.
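As a minimal illustration of why the states in such a cycle are maximally entangled: tracing out either qubit of any Bell state leaves the maximally mixed state I/2. A pure-Python sketch (real amplitudes suffice for the four Bell states):

```python
import math

# The four Bell states in the {|00>, |01>, |10>, |11>} basis
s = 1.0 / math.sqrt(2.0)
bell = {
    "phi+": [s, 0.0, 0.0, s],
    "phi-": [s, 0.0, 0.0, -s],
    "psi+": [0.0, s, s, 0.0],
    "psi-": [0.0, s, -s, 0.0],
}

def reduced_density_qubit1(state):
    """Trace out qubit 2 from a two-qubit pure state (real amplitudes):
    rho1[i][j] = sum_k state[2i+k] * state[2j+k]."""
    return [[sum(state[2 * i + k] * state[2 * j + k] for k in range(2))
             for j in range(2)] for i in range(2)]

# Each Bell state is maximally entangled: the reduced state is I/2.
for name, v in bell.items():
    rho = reduced_density_qubit1(v)
    print(name, [[round(x, 3) for x in row] for row in rho])
```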

Relevance: 100.00%

Abstract:

Objective To assess whether trends in mortality from heart failure (HF) in Australia are due to a change in awareness of the condition or to real changes in its epidemiology. Methods We carried out a retrospective analysis of official national mortality data from 1997 to 2003. A death was attributed to HF if the death certificate mentioned HF either as the underlying cause of death (UCD) or among the contributory factors. Findings From a total of 907 242 deaths, heart failure was coded as the UCD for 29 341 (3.2%) and was mentioned anywhere on the death certificate in 135 268 (14.9%). Between 1997 and 2003, there were decreases in the absolute numbers of deaths and in the age-specific and age-standardized mortality rates for HF, either as UCD or mentioned anywhere, for both sexes. HF was mentioned for 24.6% and 17.8% of deaths attributed to ischaemic heart disease and circulatory disease, respectively, and these proportions remained unchanged over the period of study. In addition, HF as UCD accounted for 8.3% of deaths attributed to circulatory disease, and this did not change materially from 1997 to 2003. Conclusion The decline in mortality from HF, measured as either number of deaths or rate, probably reflects a real change in the epidemiology of HF. Population-based studies are required to determine accurately the contributions of changes in incidence, survival and demographic factors to the evolving epidemiology of HF.
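The headline proportions can be recomputed directly from the reported counts:

```python
total_deaths = 907_242
hf_ucd = 29_341        # HF as underlying cause of death
hf_anywhere = 135_268  # HF mentioned anywhere on the certificate

pct = lambda n: round(100 * n / total_deaths, 1)
print(pct(hf_ucd), pct(hf_anywhere))
```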

Relevance: 100.00%

Abstract:

Standard upward-burning promoted ignition tests (“Standard Test Method for Determining the Combustion Behavior of Metallic Materials in Oxygen-Enriched Atmospheres,” ASTM G124 [1], or “Flammability, Odor, Offgassing, and Compatibility Requirements and Test Procedures for Materials in Environments that Support Combustion,” NASA-STD-6001, NASA Test 17 [2]) were performed on cylindrical iron (99.95% pure) rods in various oxygen purities (95.0-99.98%) in reduced gravity onboard NASA JSC's KC-135 to investigate the effect of gravity on the regression rate of the melting interface. Visual analysis of the experiments agrees with previously published observations showing distinct motions of the molten mass attached to the solid rod during testing. Using an ultrasonic technique to record the real-time rod length, comparison of the instantaneous regression rate of the melting interface with the visual recording shows a non-steady-state regression rate of the melting interface for the duration of a test. Precessional motion is associated with a higher regression rate of the melting interface than test periods in which the molten mass shows no lateral motion. The transition between the two types of molten mass motion during a test was accompanied by a reduced regression rate of the melting interface, approximately 15-50% of the average regression rate for the entire test.
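The instantaneous regression rate follows from differentiating the ultrasonically recorded rod length; a central-difference sketch (the sample data are illustrative, not from the flights):

```python
def regression_rates(times_s, lengths_mm):
    """Instantaneous melting-interface regression rate (mm/s) from
    real-time rod-length samples, via central differences (one-sided
    at the ends). Positive rate = rod shortening."""
    n = len(times_s)
    rates = []
    for i in range(n):
        lo, hi = max(i - 1, 0), min(i + 1, n - 1)
        rates.append((lengths_mm[lo] - lengths_mm[hi]) /
                     (times_s[hi] - times_s[lo]))
    return rates

# Illustrative data: the rod shortens faster during the first half,
# mimicking the higher-rate precessional-motion phase.
t = [0, 1, 2, 3, 4]
L = [100.0, 97.0, 94.0, 92.5, 91.0]
print([round(r, 2) for r in regression_rates(t, L)])
```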

Relevance: 100.00%

Abstract:

Measuring Job Openings: Evidence from Swedish Plant Level Data. In modern macroeconomic models "job openings" are a key component. Thus, when taking these models to the data, we need an empirical counterpart to the theoretical concept of job openings. To achieve this, the literature relies on job vacancies measured either in survey or register data. Insofar as this concept captures job openings well, we should see a tight relationship between vacancies and subsequent hires at the micro level. To investigate this, I analyze a new data set of Swedish hires and job vacancies at the plant level covering the period 2001-2012. I find that vacancies contain little power in predicting hires over and above (i) whether the number of vacancies is positive and (ii) plant size. Building on this, I propose an alternative measure of job openings in the economy. This measure (i) better predicts hiring at the plant level and (ii) provides a better fitting aggregate matching function vis-à-vis the traditional vacancy measure.

Firm Level Evidence from Two Vacancy Measures. Using firm level survey and register data for both Sweden and Denmark, we show systematic mis-measurement in both vacancy measures. While the register-based measure on the aggregate constitutes a quarter of the survey-based measure, the latter is not a super-set of the former. To obtain the full set of unique vacancies in these two databases, the number of survey vacancies should be multiplied by approximately 1.2. Importantly, this adjustment factor varies over time and across firm characteristics. Our findings have implications for both the search-matching literature and policy analysis based on vacancy measures: observed changes in vacancies can be an outcome of changes in mis-measurement, and are not necessarily changes in the actual number of vacancies.

Swedish Unemployment Dynamics. We study the contribution of different labor market flows to business cycle variations in unemployment in the context of a dual labor market. To this end, we develop a decomposition method that allows for a distinction between permanent and temporary employment. We also allow for the slow convergence to steady state that is characteristic of European labor markets. We apply the method to a new Swedish data set covering the period 1987-2012 and show that the relative contributions of inflows and outflows to/from unemployment are roughly 60/30. The remaining 10% are due to flows not involving unemployment. Even though temporary contracts only cover 9-11% of the working age population, variations in flows involving temporary contracts account for 44% of the variation in unemployment. We also show that the importance of flows involving temporary contracts is likely to be understated if one does not account for non-steady-state dynamics.

The New Keynesian Transmission Mechanism: A Heterogeneous-Agent Perspective. We argue that a 2-agent version of the standard New Keynesian model, where a "worker" receives only labor income and a "capitalist" only profit income, offers insights about how income inequality affects the monetary transmission mechanism. Under rigid prices, monetary policy affects the distribution of consumption, but it has no effect on output, as workers choose not to change their hours worked in response to wage movements. In the corresponding representative-agent model, in contrast, hours do rise after a monetary policy loosening due to a wealth effect on labor supply: profits fall, thus reducing the representative worker's income. If wages are rigid too, however, the monetary transmission mechanism is active and resembles that in the corresponding representative-agent model. Here, workers are not on their labor supply curve and hence respond passively to demand, and profits are procyclical.
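The role of non-steady-state dynamics in the unemployment decomposition can be sketched with the standard flow law of motion; the rates below are illustrative, not the Swedish estimates.

```python
def unemployment_path(s, f, u0, periods):
    """Law of motion u' = u + s*(1-u) - f*u for the unemployment rate,
    with separation rate s and job-finding rate f per period.
    Steady state: u* = s/(s+f). Slow convergence (low s+f) is the
    European-labor-market feature the decomposition must allow for."""
    u, path = u0, []
    for _ in range(periods):
        u = u + s * (1.0 - u) - f * u
        path.append(u)
    return path

s, f = 0.01, 0.09        # illustrative monthly rates, not Swedish data
u_star = s / (s + f)     # steady-state unemployment rate
path = unemployment_path(s, f, u0=0.05, periods=60)
print(round(u_star, 3), round(path[-1], 3))
```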

Relevance: 100.00%

Abstract:

The oculomotor synergy, as expressed by the CA/C and AC/A ratios, was investigated to examine its influence on our previous observation that, whereas convergence responses to stereoscopic images are generally stable, some individuals exhibit significant accommodative overshoot. Accommodative and convergence responses to balanced and unbalanced vergence and focal stimuli (BVFS and UBVFS) were measured with a modified video refraction unit while subjects viewed a stereoscopic LCD. Accommodative overshoot of at least 0.3 D was found in 3 out of 8 subjects for UBVFS. The accommodative response differential (RD) was taken to be the difference between the initial response and the subsequent mean static steady-state response. Without overshoot, RD was quantified by finding the initial response component. A mean RD of 0.11 +/- 0.27 D was found for the 1.0 D step UBVFS condition. The mean RD for the BVFS was 0.00 +/- 0.17 D. There was a significant positive correlation between CA/C ratio and RD (r = +0.75, n = 8, p < 0.05) for UBVFS only. We propose that inter-subject variation in RD is influenced by the CA/C ratio as follows: an initial convergence response, induced by disparity of the image, generates convergence-driven accommodation commensurate with the CA/C ratio; the associated transient defocus subsequently decays to a balanced position between defocus-induced and convergence-induced accommodation.
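The reported association is a Pearson correlation over eight subjects; a self-contained sketch (the CA/C and RD values below are made up for illustration, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation, as used above to relate the
    CA/C ratio to the accommodative response differential (RD)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Illustrative CA/C ratios and RD values (D) for 8 hypothetical subjects
cac = [0.2, 0.4, 0.5, 0.6, 0.8, 0.9, 1.0, 1.2]
rd = [-0.1, 0.0, 0.1, 0.0, 0.2, 0.3, 0.2, 0.4]
print(round(pearson_r(cac, rd), 2))
```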

Relevance: 100.00%

Abstract:

Dynamical systems that involve impacts frequently arise in engineering. This Letter reports a study of such a system at microscale: a nonlinear resonator operating against a unilateral impact. The microresonators were fabricated on silicon-on-insulator wafers using a one-mask process and then characterised using capacitive driving and sensing. Numerical results concerning the dynamics of this vibro-impact system were verified by the experiments. Bifurcation analysis was used to provide a qualitative scenario of the system's steady-state solutions as a function of both the amplitude and the frequency of the external driving sinusoidal voltage. The results show that the amplitude of the resonant peak is levelled off owing to the impact effect, and that the bandwidth of impacting depends on the nonlinearity and the operating conditions.
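The amplitude-levelling effect of the stop can be reproduced with a toy vibro-impact model: a driven, lightly damped oscillator whose velocity reverses (with restitution) at a one-sided stop. All parameter values here are illustrative, not the microresonator's.

```python
import math

def peak_response(f_drive, gap=1.0, e=0.7, zeta=0.02, w0=1.0,
                  amp=0.05, dt=0.002, t_end=200.0):
    """Semi-implicit Euler integration of
        x'' + 2*zeta*w0*x' + w0**2 * x = amp*cos(2*pi*f_drive*t)
    with a unilateral stop at x = gap (restitution e). Returns the peak
    |x| over the last quarter of the run as a steady-state estimate."""
    x, v, t, peak = 0.0, 0.0, 0.0, 0.0
    w = 2.0 * math.pi * f_drive
    while t < t_end:
        a = amp * math.cos(w * t) - 2.0 * zeta * w0 * v - w0 ** 2 * x
        v += a * dt
        x += v * dt
        if x > gap:            # unilateral impact at the stop
            x = gap
            v = -e * v
        if t > 0.75 * t_end:
            peak = max(peak, abs(x))
        t += dt
    return peak

f_res = 1.0 / (2.0 * math.pi)      # drive at the linear resonance
print(round(peak_response(f_res), 2))           # levelled off near the gap
print(round(peak_response(f_res, gap=1e9), 2))  # no impact: full resonance
```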

Relevance: 100.00%

Abstract:

The thesis is concerned with the development and testing of a mathematical model of a distillation process in which the components react chemically. The formaldehyde-methanol-water system was selected, and only the reversible reactions between formaldehyde and water giving methylene glycol and between formaldehyde and methanol producing hemiformal were assumed to occur under the distillation conditions. Accordingly the system has been treated as a five-component system. The vapour-liquid equilibrium calculations were performed by iteratively solving the thermodynamic relationships expressing the phase equilibria together with the stoichiometric equations expressing the chemical equilibria. Using optimisation techniques, the Wilson single parameters and Henry's constants were calculated for binary systems containing formaldehyde, which was assumed to be a supercritical component, whilst Wilson binary parameters were calculated for the remaining binary systems. Thus the phase equilibria for the formaldehyde system could be calculated using these parameters, and good accuracy was obtained when calculated values were compared with experimental values. The distillation process was modelled using the mass and energy balance equations together with the phase equilibria calculations. The plate efficiencies were obtained from a modified A.I.Ch.E. Bubble Tray method. The resulting equations were solved by an iterative plate-to-plate calculation based on the Newton-Raphson method. Experiments were carried out in a 76 mm I.D., eight sieve plate distillation column and the results were compared with the mathematical model calculations. Overall, good agreement was obtained, but some discrepancies were observed in the concentration profiles; these may have been caused by limited physical property data and a limited understanding of the reaction mechanisms. The model equations were solved in the form of modular computer programs. Although they were written to describe steady-state distillation with simultaneous chemical reaction for the formaldehyde system, the approach used may be of wider application.
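The iterative chemical-equilibrium step can be illustrated with a one-reaction Newton-Raphson solve. The reaction A + B <-> C stands in for formaldehyde + water <-> methylene glycol; the equilibrium constant and feed moles below are illustrative, not fitted values from the thesis.

```python
def equilibrium_extent(k_eq, a0, b0, tol=1e-10):
    """Newton-Raphson solution for the extent x of A + B <-> C with
    equilibrium relation K = x / ((a0 - x)*(b0 - x)) on a mole basis.
    Solves the residual f(x) = K*(a0 - x)*(b0 - x) - x = 0 starting
    from x = 0 (the physical root)."""
    x = 0.0
    for _ in range(100):
        f = k_eq * (a0 - x) * (b0 - x) - x
        df = -k_eq * ((a0 - x) + (b0 - x)) - 1.0
        step = f / df
        x -= step
        if abs(step) < tol:
            break
    return x

# K = 2, equimolar feed: analytic physical root is x = 0.5
x = equilibrium_extent(k_eq=2.0, a0=1.0, b0=1.0)
print(round(x, 4))
```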

Relevance: 100.00%

Abstract:

The objective of this work was to design, construct, test and operate a novel circulating fluid bed (CFB) fast pyrolysis reactor system for production of liquids from biomass. The novelty lies in incorporating an integral char combustor to provide autothermal operation. A reactor design methodology was devised which correlated input parameters to process variables, namely temperature, heat transfer and gas/vapour residence time, for both the char combustor and the biomass pyrolyser. From this methodology a CFB reactor with integral char combustion was designed for 10 kg/h biomass throughput. A full-scale cold model of the CFB unit was constructed and tested to derive suitable hydrodynamic relationships and performance constraints. Early difficulties encountered with poor solids circulation and inefficient product recovery were overcome by a series of modifications. A total of 11 runs in pyrolysis mode were carried out, with a maximum total liquids yield of 61.50 wt% on a moisture-and-ash-free (maf) biomass basis, obtained at 500°C and 0.46 s gas/vapour residence time. This could be increased to an anticipated 75 wt% on a maf biomass basis through improved vapour recovery by direct quenching. The reactor provides a very high specific throughput of 1.12-1.48 kg/h·m2 and the lowest gas-to-feed ratio, 1.3-1.9 kg gas/kg feed, compared with other fast pyrolysis processes based on pneumatic reactors, and it has good scale-up potential. These features should provide significant capital cost reduction. Results to date suggest that the process is limited by the extent of char combustion. Future work will address resizing of the char combustor to increase overall system capacity, improvement in solids separation and substantially better liquid recovery. Extended testing will provide better evaluation of steady-state operation and provide data for process simulation and reactor modelling.

Relevance: 100.00%

Abstract:

The aim of this work was to develop a generic methodology for evaluating and selecting, at the conceptual design phase of a project, the best process technology for natural gas conditioning. A generic approach would be simple, would require less time, and would give a better understanding of why one process is to be preferred over another. Such a methodology would be useful in evaluating existing, novel and hybrid technologies. However, to date no information is available in the published literature on such a generic approach to gas processing. It is believed that the generic methodology presented here is the first available for choosing the best or cheapest method of separation for natural gas dew-point control. Process cost data are derived from evaluations carried out by the vendors. These evaluations are then modelled using a steady-state simulation package. From the results of the modelling, the cost data received are correlated and defined with respect to the design or sizing parameters. This allows comparisons between different process systems to be made in terms of the overall process. The generic methodology is based on the concept of a Comparative Separation Cost, which takes into account the efficiency of each process, the value of its products, and the associated costs. To illustrate the general applicability of the methodology, three different cases suggested by BP Exploration are evaluated. This work has shown that it is possible to identify the most competitive process operations at the conceptual design phase and to illustrate why one process has an advantage over another. Furthermore, the same methodology has been used to identify and evaluate hybrid processes. It has been determined that in some cases they offer substantial advantages over the separate process techniques.

Relevance: 100.00%

Abstract:

Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. The process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, resembling the ranges employed practically under stable and efficient conditions. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model, namely a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and the model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared with those experimentally measured, and very good agreement was achieved, within a relative error of ±2.5%.
The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled both as multi-loop decentralised SISO (Single Input Single Output) and as centralised MIMO (Multi-Input Multi-Output) systems, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capabilities and load rejection. For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The rotor speed-raffinate concentration and solvent flowrate-extract concentration loops showed weak interaction.
Multivariable MPC showed more effective performance than the conventional techniques, since it accounts for loop interaction, time delays, and input-output variable constraints.
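The parameter-estimation step, minimising the gap between model and measured steady-state concentration profiles, can be sketched with a crude grid search standing in for the thesis's non-linear optimisation (the profile and "measurements" below are toy constructs):

```python
import math

def fit_parameter(model, measured, candidates):
    """Grid-search least squares: choose the parameter whose predicted
    steady-state concentration profile best matches the measurements
    (a simple stand-in for the non-linear optimisation used above)."""
    sse = lambda p: sum((m - d) ** 2 for m, d in zip(model(p), measured))
    return min(candidates, key=sse)

# Toy exponential stage profile c_i = exp(-p*i); the "measurements" are
# generated with p = 0.3, so the fit should recover it exactly.
stages = range(8)
profile = lambda p: [math.exp(-p * i) for i in stages]
measured = profile(0.3)
best = fit_parameter(profile, measured, [i / 100 for i in range(10, 60)])
print(best)  # 0.3
```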

Relevance: 100.00%

Abstract:

Several fermentation methods for the production of the enzyme dextransucrase have been employed. The theoretical aspects of these fermentation techniques are given in the early chapters of this thesis, together with a brief overview of enzyme biotechnology. A literature survey on cell recycle fermentation has been carried out, followed by a survey report on dextransucrase production, purification and the reaction mechanism of dextran biosynthesis. The experimental apparatus employed in this research is described in detail; in particular, emphasis has been given to the development of continuous cell recycle fermenters. On the laboratory scale, fed-batch fermentations under anaerobic low-agitation conditions resulted in dextransucrase activities of about 450 DSU/cm3, much higher than the yields reported in the literature for aerobic conditions. In conventional continuous culture the dilution rate was varied between 0.375 h-1 and 0.55 h-1. The general pattern observed was that the enzyme activity decreased with increasing dilution rate; the maximum enzyme activity in these experiments was ~74 DSU/cm3. Sparging the fermentation broth with CO2 in continuous culture appears to result in a decrease in enzyme activity. In continuous total cell recycle fermentations, high steady-state biomass levels were achieved but the enzyme activity was low, in the range 4-27 DSU/cm3; this fermentation environment affected the physiology of the microorganism. The behaviour of the cell recycle system employed in this work, together with its performance and the factors that affected it, is discussed in the relevant chapters. By retaining the whole broth leaving a continuous fermenter for between 1.5 and 4 h under controlled conditions, the enzyme activity was enhanced from 86 DSU/cm3 to 180 DSU/cm3, which represents a 106% increase over the enzyme activity achieved by a steady-state conventional chemostat. A novel process for dextran production has been proposed based on the findings of this latter part of the experimental work.
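The activity drop with dilution rate is consistent with standard chemostat behaviour, where residual substrate rises as the dilution rate approaches the maximum growth rate. A Monod-model sketch; the kinetic parameters below are illustrative, not measured for this fermentation.

```python
def chemostat_steady_state(d, mu_max, ks, s_in, yield_xs):
    """Monod chemostat at steady state: growth rate mu equals the
    dilution rate D, so S* = Ks*D/(mu_max - D) and X* = Y*(S_in - S*).
    Washout occurs when D >= mu_max."""
    if d >= mu_max:
        return 0.0, s_in          # washout: no biomass remains
    s_star = ks * d / (mu_max - d)
    x_star = yield_xs * (s_in - s_star)
    return x_star, s_star

# Higher dilution rate -> more residual substrate, less conversion,
# qualitatively matching the activity drop from D = 0.375 to 0.55 h^-1.
for d in (0.375, 0.55):
    x, s = chemostat_steady_state(d, mu_max=0.6, ks=0.5, s_in=20.0,
                                  yield_xs=0.4)
    print(d, round(x, 2), round(s, 2))
```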

Relevance: 100.00%

Abstract:

This thesis describes the design and implementation of an interactive dynamic simulator called DASPII. The starting point of this research was an existing dynamic simulation package, DASP. DASPII is written in standard FORTRAN 77 and is implemented on universally available IBM-PC or compatible machines. It provides a means for the analysis and design of chemical processes. Industrial interest in dynamic simulation has increased with the recent growth in concern over plant operability, resiliency and safety. DASPII is an equation-oriented simulation package which allows solution of dynamic and steady-state equations; the steady state can be used to initialise the dynamic simulation. A robust nonlinear algebraic equation (NLAE) solver has been implemented for steady-state solution, which has increased the general robustness of DASPII compared with DASP. A graphical front end is used to generate the process flowsheet topology from a user-constructed diagram of the process, and a conversational interface, backed by a database, interrogates the user to complete the topological information. An original modelling strategy implemented in DASPII provides a simple mechanism for parameter switching, which creates a more flexible simulation environment. The problem description is generated by a further conversational procedure using a database. The model format used allows the same model equations to be used for dynamic and steady-state solution. All the useful features of DASP are retained in DASPII. The program has been demonstrated and verified using a number of example problems, and significant improvements from the new NLAE solver have been shown. The benefits of parameter switching in models have been demonstrated with a literature problem. Topics requiring further research are described.
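The NLAE initialisation step amounts to a Newton solve of the steady-state equations; a minimal 2x2 Newton iteration on a toy system (not DASPII's actual solver or models):

```python
def newton_2x2(f, jac, x0, tol=1e-10, max_iter=50):
    """Newton iteration for a 2x2 nonlinear algebraic system, the kind
    of solve used to initialise a dynamic model from its steady state.
    f(x, y) returns the two residuals; jac(x, y) returns the Jacobian
    entries (df1/dx, df1/dy, df2/dx, df2/dy)."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = f(x, y)
        a, b, c, d = jac(x, y)
        det = a * d - b * c
        dx = (f1 * d - f2 * b) / det    # Cramer's rule: J * delta = f
        dy = (a * f2 - c * f1) / det
        x, y = x - dx, y - dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

# Toy steady-state pair: x + y = 3, x*y = 2 (roots (1,2) and (2,1))
f = lambda x, y: (x + y - 3.0, x * y - 2.0)
jac = lambda x, y: (1.0, 1.0, y, x)
print(newton_2x2(f, jac, (2.5, 0.2)))
```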