942 results for Climatic data simulation


Relevance:

30.00%

Publisher:

Abstract:

Aim: To develop an appropriate dosing strategy for continuous intravenous infusions (CII) of enoxaparin by minimizing the percentage of steady-state anti-Xa concentrations (Css) outside the therapeutic range of 0.5-1.2 IU ml⁻¹. Methods: A nonlinear mixed effects model was developed with NONMEM® for 48 adult patients who received CII of enoxaparin with infusion durations that ranged from 8 to 894 h at rates between 100 and 1600 IU h⁻¹. Three hundred and sixty-three anti-Xa concentration measurements were available from patients who received CII. These were combined with 309 anti-Xa concentrations from 35 patients who received subcutaneous enoxaparin. The effects of age, body size, height, sex, creatinine clearance (CrCL) and patient location [intensive care unit (ICU) or general medical unit] on pharmacokinetic (PK) parameters were evaluated. Monte Carlo simulations were used to (i) evaluate covariate effects on Css and (ii) compare the impact of different infusion rates on predicted Css. The best dose was selected based on the highest probability that the achieved Css would lie within the therapeutic range. Results: A two-compartment linear model with additive and proportional residual error for general medical unit patients, and only a proportional error for patients in the ICU, provided the best description of the data. CrCL and weight were found to significantly affect clearance and the volume of distribution of the central compartment, respectively. Simulations suggested that the best doses for patients in the ICU setting were 50 IU kg⁻¹ per 12 h (4.2 IU kg⁻¹ h⁻¹) if CrCL < 30 ml min⁻¹; 60 IU kg⁻¹ per 12 h (5.0 IU kg⁻¹ h⁻¹) if CrCL was 30-50 ml min⁻¹; and 70 IU kg⁻¹ per 12 h (5.8 IU kg⁻¹ h⁻¹) if CrCL > 50 ml min⁻¹. The best doses for patients in the general medical unit were 60 IU kg⁻¹ per 12 h (5.0 IU kg⁻¹ h⁻¹) if CrCL < 30 ml min⁻¹; 70 IU kg⁻¹ per 12 h (5.8 IU kg⁻¹ h⁻¹) if CrCL was 30-50 ml min⁻¹; and 100 IU kg⁻¹ per 12 h (8.3 IU kg⁻¹ h⁻¹) if CrCL > 50 ml min⁻¹. These best doses were selected as providing equal (and lowest) probabilities of being above or below the therapeutic range, together with the highest probability that the achieved Css would lie within it. Conclusion: The dose of enoxaparin should be individualized to the patient's renal function and weight. There is some evidence to support slightly lower doses of CII enoxaparin for patients in the ICU setting.
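As an illustration of the dose-selection logic described in this abstract, the sketch below runs a small Monte Carlo experiment: for a continuous infusion of a drug with linear pharmacokinetics, Css = infusion rate / clearance, so candidate rates can be ranked by the probability that Css falls within 0.5-1.2 IU ml⁻¹. The clearance-CrCL covariate model and all parameter values are illustrative placeholders, not the published NONMEM estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_css(rate_iu_per_kg_h, weight_kg, crcl_ml_min, n=10_000):
    """Monte Carlo sketch of steady-state anti-Xa concentration for a continuous
    enoxaparin infusion.  For any linear PK model Css = R0 / CL, so only
    clearance needs to be simulated.  The covariate model and parameter values
    below are illustrative, not the study's fitted estimates."""
    cl_typical = 0.7 * (crcl_ml_min / 70.0)                       # L/h, hypothetical
    cl = cl_typical * rng.lognormal(mean=0.0, sigma=0.3, size=n)  # ~30% CV between subjects
    rate_iu_h = rate_iu_per_kg_h * weight_kg                      # total rate, IU/h
    return rate_iu_h / (cl * 1000.0)                              # IU/ml (CL converted to ml/h)

def prob_in_range(css, lo=0.5, hi=1.2):
    return float(np.mean((css >= lo) & (css <= hi)))

# Rank the candidate rates mentioned in the abstract for one illustrative patient.
for rate in (4.2, 5.0, 5.8, 8.3):                                 # IU/kg/h
    p = prob_in_range(simulate_css(rate, weight_kg=70, crcl_ml_min=40))
    print(f"{rate:4.1f} IU/kg/h -> P(Css in 0.5-1.2 IU/ml) = {p:.2f}")
```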

Relevance:

30.00%

Publisher:

Abstract:

This study was undertaken to develop a simple laboratory-based method for simulating the freezing profiles of beef trim so that their effect on E. coli O157 survival could be better assessed. A commercially available apparatus of the type used for freezing embryos, together with an associated temperature logger and software, was used for this purpose, with a −80 °C freezer as a heat sink. Four typical beef trim freezing profiles, of different starting temperatures or lengths, were selected and modelled as straight lines for ease of manipulation. A further theoretical profile with an extended freezing plateau was also developed. The laboratory-based setup worked well and the modelled freezing profiles fitted closely to the original data. No change in numbers of any of the strains was apparent for the three simulated profiles of different lengths starting at 25 °C. Slight but significant (P < 0.05) decreases in numbers (~0.2 log cfu g⁻¹) of all strains were apparent for a profile starting at 12 °C. A theoretical version of this profile, with the freezing plateau phase extended from 11 h to 17 h, resulted in significant (P < 0.05) decreases in numbers (~1.2 log cfu g⁻¹) of all strains. The results indicated possible avenues for future research in controlling this pathogen. The method developed in this study proved a useful and cost-effective way of simulating the freezing profiles of beef trim. © 2005 Elsevier B.V. All rights reserved.
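Since the freezing profiles were modelled as straight-line segments, the idea can be captured in a few lines of code. The sketch below builds a piecewise-linear temperature profile from (time, temperature) breakpoints; the breakpoint values are illustrative and not taken from the study.

```python
import numpy as np

def piecewise_linear_profile(breakpoints):
    """Return T(t) for a freezing profile modelled as straight-line segments
    between (time_h, temp_C) breakpoints."""
    times, temps = map(np.asarray, zip(*breakpoints))
    return lambda t: np.interp(t, times, temps)

# Illustrative profile: cool from 12 C, hold a freezing plateau, then fall to -20 C.
profile = piecewise_linear_profile([(0, 12), (6, -1), (17, -1.5), (30, -20)])
for t in (0, 6, 12, 24, 30):
    print(f"t = {t:2d} h -> T = {profile(t):6.1f} C")
```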

Relevance:

30.00%

Publisher:

Abstract:

Brugada syndrome (BS) is a genetic disease identified by an abnormal electrocardiogram (ECG), mainly ECGs associated with right bundle branch block and ST-elevation in the right precordial leads. BS can lead to an increased risk of sudden cardiac death. Experimental studies on human ventricular myocardium with BS have been limited due to difficulties in obtaining data, so computer simulation is an important alternative. Most previous BS simulations were based on animal heart cell models. However, due to species differences, the use of human heart cell models, especially a model with three-dimensional whole-heart anatomical structure, is needed. In this study, we developed a model of the human ventricular action potential (AP) based on refining the ten Tusscher et al (2004 Am. J. Physiol. Heart Circ. Physiol. 286 H1573-89) model to incorporate newly available experimental data on some major ionic currents of human ventricular myocytes. The modified channels include the L-type calcium current (I_CaL), fast sodium current (I_Na), transient outward potassium current (I_to), rapidly and slowly delayed rectifier potassium currents (I_Kr and I_Ks) and inward rectifier potassium current (I_K1). Transmural heterogeneity of APs for epicardial, endocardial and mid-myocardial (M) cells was simulated by varying the maximum conductances of I_Ks and I_to. The modified AP models were then used to simulate the effects of BS on the cellular AP and on body surface potentials using a three-dimensional dynamic heart-torso model. Our main findings are as follows. (1) BS has little effect on the AP of endocardial or mid-myocardial cells, but has a large impact on the AP of epicardial cells. (2) A likely region of BS with abnormal cellular AP is near the right ventricular outflow tract, and the resulting ST-segment elevation is located in the median precordial area. These simulation results are consistent with experimental findings reported in the literature. The model can reproduce a variety of electrophysiological behaviours and provides a good basis for understanding the genesis of the abnormal ECG under the condition of BS.
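The transmural heterogeneity described here is obtained by scaling the maximum conductances of I_Ks and I_to per cell type, and the BS condition mainly perturbs the epicardial AP. The minimal sketch below only shows how such scale factors might be organised; all numerical values are placeholders, and the BS modification (reduced depolarising currents in epicardial cells) is a common modelling assumption rather than the study's fitted change.

```python
# Illustrative-only base conductances and per-cell-type scale factors.
BASE = {"g_Na": 15.0, "g_CaL": 2.0e-4, "g_to": 0.30, "g_Ks": 0.25}

CELL_TYPE_SCALES = {              # transmural heterogeneity via I_to and I_Ks only
    "epi":  {"g_to": 1.00, "g_Ks": 1.00},
    "endo": {"g_to": 0.25, "g_Ks": 1.00},
    "M":    {"g_to": 1.00, "g_Ks": 0.25},
}

def conductances(cell_type, brugada=False):
    """Return maximum conductances for one cell type, optionally applying a
    Brugada-type change to epicardial cells (placeholder values throughout)."""
    g = dict(BASE)
    for name, scale in CELL_TYPE_SCALES[cell_type].items():
        g[name] *= scale
    if brugada and cell_type == "epi":
        g["g_Na"] *= 0.5      # assumed loss of depolarising current
        g["g_CaL"] *= 0.5
    return g

print(conductances("epi", brugada=True))
```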

Relevance:

30.00%

Publisher:

Abstract:

GCMC simulations are applied to the adsorption of sub-critical methanol and ethanol on graphitized carbon black at 300 K. The carbon black was modelled both with and without carbonyl functional groups. Large differences are seen between the amounts adsorbed for different carbonyl configurations at low pressure, prior to monolayer coverage. Once a monolayer has formed on the carbon black, the adsorption behaviour of the model surfaces with and without functional groups is similar. Simulated isotherms for low carbonyl concentrations, or no carbonyls, are qualitatively similar to the few experimental isotherms available in the literature for methanol and ethanol adsorption on highly graphitized carbon black. Isosteric heats and adsorbed-phase heat capacities are shown to be very sensitive to carbonyl configuration. A maximum is observed in the adsorbed-phase heat capacity of the alcohols in all simulations, but it is unrealistically high for a plain graphite surface. Adding carbonyls to the surface greatly reduces this maximum and approaches the experimental data at carbonyl concentrations as low as 0.09 carbonyls/nm².
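The isosteric heats reported here are commonly obtained from GCMC runs via the standard fluctuation formula q_st = k_B·T − cov(U, N)/var(N); the helper below computes it from sampled configurational energies and particle numbers (this is the textbook estimator, not necessarily the exact post-processing used in the paper).

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def isosteric_heat(U, N, T):
    """Isosteric heat of adsorption from GCMC fluctuations:
    q_st = k_B*T - (<UN> - <U><N>) / (<N^2> - <N>^2).
    U: configurational energies (J) sampled over the run, N: particle numbers,
    T: temperature (K).  Returns q_st per molecule in joules."""
    U, N = np.asarray(U, float), np.asarray(N, float)
    cov_un = np.mean(U * N) - np.mean(U) * np.mean(N)
    var_n = np.mean(N ** 2) - np.mean(N) ** 2
    return K_B * T - cov_un / var_n
```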

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we consider how refinements between state-based specifications (e.g., written in Z) can be checked by use of a model checker. Specifically, we are interested in the verification of downward and upward simulations, which are the standard approach to verifying refinements in state-based notations. We show how downward and upward simulations can be checked using existing temporal logic model checkers. In particular, we show how the branching-time temporal logic CTL can be used to encode the standard simulation conditions. We do this both for a blocking, or guarded, interpretation of operations (often used when specifying reactive systems) and for the more common non-blocking interpretation of operations used in many state-based specification languages (for modelling sequential systems). The approach is general enough to use with any state-based specification language, and we illustrate, with a small example, how refinements between Z specifications can be checked using the SAL CTL model checker.
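For reference, the standard downward simulation proof obligations that such a CTL encoding has to capture, stated here in their usual textbook form for the non-blocking interpretation, with retrieve relation R linking abstract and concrete states (the paper's actual CTL formulation is not reproduced here):

```latex
% Standard non-blocking downward simulation conditions (textbook form).
\begin{align*}
\text{Initialisation: } & \forall\, CState' \bullet CInit \Rightarrow
                          (\exists\, AState' \bullet AInit \land R') \\
\text{Applicability: }  & \forall\, AState;\ CState \bullet
                          \mathrm{pre}\,AOp \land R \Rightarrow \mathrm{pre}\,COp \\
\text{Correctness: }    & \forall\, AState;\ CState;\ CState' \bullet
                          \mathrm{pre}\,AOp \land R \land COp \Rightarrow
                          (\exists\, AState' \bullet AOp \land R')
\end{align*}
```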

Relevance:

30.00%

Publisher:

Abstract:

The adsorption of Lennard-Jones fluids (argon and nitrogen) onto a graphitized thermal carbon black surface was studied with Grand Canonical Monte Carlo (GCMC) simulation. The surface was assumed to be finite in length and composed of three graphene layers. When the GCMC simulation was used to describe adsorption on a graphite surface, an over-prediction of the isotherm was consistently observed in the pressure regions where the first and second layers are formed. To remove this over-prediction, surface mediation was accounted for by reducing the fluid-fluid interaction. Do and co-workers have introduced the so-called surface-mediation damping factor to correct the over-prediction for the case of a graphite surface of infinite extent, and this approach has yielded a good description of the adsorption isotherm. In this paper, the effects of the finite size of the graphene layer on the adsorption isotherm, and how these affect the extent of the surface mediation, were studied. It was found that this finite-surface model provides a better description of the experimental data for graphitized thermal carbon black of high surface area (i.e. small crystallite size), while the infinite-surface model describes data for carbon black of very low surface area (i.e. large crystallite size).
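The surface-mediation idea, reducing the fluid-fluid interaction for molecules adsorbed close to the graphite surface, can be sketched as a damping factor applied to the Lennard-Jones well depth. The functional form and every number below are illustrative; they are not the parameterisation of Do and co-workers.

```python
import numpy as np

def lj(r, eps, sigma):
    """12-6 Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

def fluid_fluid_energy(r, z_i, z_j, eps_ff, sigma_ff,
                       damping=0.9, z_first_layer=0.6e-9):
    """Damped fluid-fluid interaction: the well depth is reduced when both
    molecules lie within the first adsorbed layer (z below z_first_layer).
    Purely illustrative surface-mediation sketch."""
    eps = eps_ff * damping if (z_i < z_first_layer and z_j < z_first_layer) else eps_ff
    return lj(r, eps, sigma_ff)
```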

Relevance:

30.00%

Publisher:

Abstract:

We show that the simple quasi-static technique, also called the adiabatic mapping technique, can be used to determine the energetics of rotation of methyl and methoxy groups in amorphous poly(vinyl methyl ether) even though the latter process is too slow to be amenable to direct molecular dynamics simulation. For the methyl group rotation, we find that the mean and standard deviation of the simulated rotational barrier heights agree well with experimental data from quasi-elastic neutron scattering. In the case of the methoxy groups we find that just 4% of the groups contribute more than 90% of the observed dielectric relaxation strength. The groups which make the most contribution are those which, by virtue of their particular conformation and local environment, have two alternative positions of similar energy.
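The quasi-static (adiabatic mapping) procedure amounts to stepping the rotating group through a grid of dihedral angles and relaxing the surroundings at each step; the barrier is then read off the resulting energy profile. In the sketch below, `minimize_with_fixed_dihedral` is a hypothetical stand-in for whatever constrained minimiser the simulation package provides; it is not a real library call.

```python
import numpy as np

def adiabatic_scan(system, group, minimize_with_fixed_dihedral, step_deg=10.0):
    """Quasi-static scan: for each constrained dihedral angle of the methyl or
    methoxy group, minimise the rest of the system and record the energy.
    `minimize_with_fixed_dihedral(system, group, angle_deg)` is hypothetical."""
    angles = np.arange(0.0, 360.0, step_deg)
    profile = np.array([minimize_with_fixed_dihedral(system, group, a) for a in angles])
    return angles, profile

def rotational_barrier(profile_kj_mol):
    """Barrier height (kJ/mol) from the scanned energy profile."""
    profile_kj_mol = np.asarray(profile_kj_mol, float)
    return profile_kj_mol.max() - profile_kj_mol.min()
```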

Relevance:

30.00%

Publisher:

Abstract:

To foster ongoing international cooperation beyond ACES (APEC Cooperation for Earthquake Simulation) on the simulation of solid earth phenomena, agreement was reached to work towards establishment of a frontier international research institute for simulating the solid earth: iSERVO = International Solid Earth Research Virtual Observatory institute (http://www.iservo.edu.au). This paper outlines a key Australian contribution towards the iSERVO institute seed project: the construction of (1) a typical intraplate fault system model using practical fault system data from South Australia (i.e., an SA interacting fault model), which includes data management and editing, geometrical modelling and mesh generation; and (2) a finite-element based software tool, built on our long-term and ongoing effort to develop the R-minimum-strategy-based finite-element computational algorithm and software tool for modelling three-dimensional nonlinear frictional contact behaviour between multiple deformable bodies with the arbitrarily-shaped contact element strategy. A numerical simulation of the SA fault system is carried out using this software tool to demonstrate its capability and our efforts towards seeding the iSERVO institute.

Relevance:

30.00%

Publisher:

Abstract:

The structure of a comprehensive research project into mine fires, applying the Ventgraph mine fire simulation software, pre-planning of escape scenarios and general interaction with rescue responses, is outlined. The project has Australian Coal Association Research Program (ACARP) funding and also relies on substantial mining company site support. This practical input from mine operators is essential and allows the approach to be introduced in the most credible way. The effort is built around the introduction of fire simulation computer software to the Australian mining industry and the consequent modelling of fire scenarios in selected different mine layouts. Application of the simulation software package to the changing mine layouts requires experience to achieve realistic outcomes. Most Australian mines of significant size currently use a ventilation network simulation program. Under the project, a small subroutine has been written to transfer the input data from the existing mine ventilation network simulation program to ‘Ventgraph’; this has been tested successfully. To understand fire simulation behaviour on a mine ventilation system, it is first necessary to correctly understand the possible effects of mine fires on various mine ventilation systems. Case studies demonstrating the possible effects of fires on some typical Australian coal mine ventilation circuits have been examined. Situations in which there is some gas make at the face, and its effects in combination with a fire, have also been developed to emphasise how unstable and dangerous conditions may arise. The primary objective of the part of the study described in this paper is to use mine fire simulation software to gain a better understanding of how spontaneous combustion initiated fires can interact with the complex ventilation behaviour underground during a substantial fire. It focuses on the simulation of spontaneous combustion sourced heatings that develop into open fires. Further, it examines the ventilation effects of spontaneous combustion initiated pillar fires and the difficulties these can present if a ventilation reversal occurs. It also briefly examines simulation of the use of inertisation to assist in mine recovery. Mine fires are recognised across the world as a major hazard issue, and new approaches allowing improved understanding of their consequences have been developed as an aid in handling this complex area.

Relevance:

30.00%

Publisher:

Abstract:

The Operator Choice Model (OCM) was developed to model the behaviour of operators attending to complex tasks involving interdependent concurrent activities, such as in Air Traffic Control (ATC). The purpose of the OCM is to provide a flexible framework for modelling and simulation that can be used for quantitative analyses in human reliability assessment, comparison between human-computer interaction (HCI) designs, and analysis of operator workload. The OCM virtual operator is essentially a cycle of four processes: Scan, Classify, Decide Action, Perform Action. Once a cycle is complete, the operator returns to the Scan process. It is also possible to truncate a cycle and return to Scan after any of the processes. These processes are described using Continuous Time Probabilistic Automata (CTPA). The details of the probability and timing models are specific to the domain of application, and need to be specified with domain experts. We are building an application of the OCM for use in ATC. In order to develop a realistic model we are calibrating the probability and timing models that comprise each process using experimental data from a series of experiments conducted with student subjects. These experiments have identified the factors that influence perception and decision making in simplified conflict detection and resolution tasks. This paper presents an application of the OCM approach to a simple ATC conflict detection experiment. The aim is to calibrate the OCM so that its behaviour resembles that of the experimental subjects when it is challenged with the same task; its behaviour should also interpolate when challenged with scenarios similar to those used to calibrate it. The approach illustrated here uses logistic regression to model the classifications made by the subjects. This model is fitted to the calibration data, and provides an extrapolation to classifications in scenarios outside the calibration data. A simple strategy is used to calibrate the timing component of the model, and the resulting reaction times are compared between the OCM and the student subjects. While this approach to timing does not capture the full complexity of the reaction time distribution seen in the data from the student subjects, the mean and the tail of the distributions are similar.
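A minimal sketch of the logistic-regression step used for the Classify process is shown below. The two scenario features and the synthetic responses are invented for illustration; in the study the model is fitted to the classifications recorded from the student subjects.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical calibration data: each row is a conflict-detection scenario with
# two illustrative features (predicted minimum separation in NM, time to closest
# approach in minutes); y = 1 if the subject called the scenario a conflict.
X = rng.normal(loc=[5.0, 8.0], scale=[2.0, 3.0], size=(200, 2))
true_logit = 3.0 - 0.8 * X[:, 0] + 0.1 * X[:, 1]
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

model = LogisticRegression().fit(X, y)

# The fitted model supplies the Classify step with a probability of declaring a
# conflict for any scenario, including ones outside the calibration set.
p_conflict = model.predict_proba([[3.5, 6.0]])[0, 1]
print(f"P(classified as conflict) = {p_conflict:.2f}")
```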

Relevance:

30.00%

Publisher:

Abstract:

Visualization has proven to be a powerful and widely-applicable tool for the analysis and interpretation of data. Most visualization algorithms aim to find a projection from the data space down to a two-dimensional visualization space. However, for complex data sets living in a high-dimensional space it is unlikely that a single two-dimensional projection can reveal all of the interesting structure. We therefore introduce a hierarchical visualization algorithm which allows the complete data set to be visualized at the top level, with clusters and sub-clusters of data points visualized at deeper levels. The algorithm is based on a hierarchical mixture of latent variable models, whose parameters are estimated using the expectation-maximization algorithm. We demonstrate the principle of the approach first on a toy data set, and then apply the algorithm to the visualization of a synthetic data set in 12 dimensions obtained from a simulation of multi-phase flows in oil pipelines, and to data in 36 dimensions derived from satellite images.
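The sketch below is a simplified stand-in for the two-level idea: one 2-D view of the whole data set at the top, then one local 2-D view per cluster underneath. It uses plain PCA plus a Gaussian mixture (itself fitted by EM) instead of the paper's hierarchical mixture of latent variable models, purely to illustrate the structure of the output.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def hierarchical_views(X, n_clusters=3, seed=0):
    """Top-level 2-D projection plus per-cluster 2-D projections (a simplified
    analogue of the hierarchical latent-variable visualisation)."""
    top = PCA(n_components=2).fit_transform(X)                 # top-level view
    labels = GaussianMixture(n_clusters, random_state=seed).fit_predict(X)
    sub = {k: PCA(n_components=2).fit_transform(X[labels == k])  # per-cluster views
           for k in range(n_clusters)}
    return top, labels, sub

# Placeholder 12-dimensional data standing in for the oil-pipeline data set.
X = np.random.default_rng(0).normal(size=(500, 12))
top, labels, sub = hierarchical_views(X)
print(top.shape, {k: v.shape for k, v in sub.items()})
```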

Relevance:

30.00%

Publisher:

Abstract:

We use molecular dynamics simulations to compare the conformational structure and dynamics of a 21-base-pair RNA sequence initially constructed according to the canonical A-RNA and A'-RNA forms, in the presence of counterions and explicit water. Our study aims to add a dynamical perspective to the solid-state structural information that has been derived from X-ray data for these two characteristic forms of RNA. Analysis over a 30 ns simulation period of the three main structural descriptors commonly used to differentiate between the two forms of RNA (major groove width, inclination and the number of base pairs per helical turn) reveals a flexible structure in aqueous solution, with fluctuations in the values of these structural parameters encompassing the range between the two crystal forms and beyond. This provides evidence to suggest that the identification of distinct A-RNA and A'-RNA structures, while relevant in the crystalline form, may not be generally relevant in the context of RNA in the aqueous phase. The apparent structural flexibility observed in our simulations is likely to have ramifications for the interactions of RNA with biological molecules (e.g. proteins) and non-biological molecules (e.g. non-viral gene delivery vectors). © CSIRO 2009.
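One of the descriptors mentioned, the number of base pairs per helical turn, follows directly from the per-step twist angle (n = 360° / mean twist). The tiny helper below illustrates this; the twist values are illustrative, and a real analysis would take them from a helical-parameter tool applied to the trajectory.

```python
import numpy as np

def base_pairs_per_turn(twist_deg):
    """Base pairs per helical turn from per-step helical twist angles (degrees)."""
    return 360.0 / float(np.mean(twist_deg))

# Illustrative step twists: A-form RNA twist is roughly 32-33 degrees per step,
# i.e. about 11 bp per turn.
print(round(base_pairs_per_turn([32.7, 31.9, 33.2, 32.1]), 1))
```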

Relevance:

30.00%

Publisher:

Abstract:

This paper argues for the use of reusable simulation templates as a tool that can help to predict the effect of e-business introduction on business processes. First, a set of requirements for e-business modelling is introduced and the modelling options are described. Traditional business process mapping techniques are examined as a way of identifying potential changes. Whilst paper-based process mapping may not highlight significant differences between traditional and e-business processes, simulation does allow the real effects of e-business to be identified. Simulation has the advantage of capturing the dynamic characteristics of the process, thus reflecting more accurately the changes in behaviour. This paper shows the value of using generic process maps as a starting point for collecting the data needed to build the simulation, and proposes the use of reusable templates/components for the speedier building of e-business simulation models.

Relevance:

30.00%

Publisher:

Abstract:

In multilevel analyses, problems may arise when using Likert-type scales at the lowest level of analysis. Specifically, increases in variance should lead to greater censoring for groups whose true scores fall at either end of the distribution. The current study used simulation methods to examine the influence of single-item Likert-type scale usage on ICC(1), ICC(2), and group-level correlations. Results revealed substantial underestimation of ICC(1) when using Likert-type scales with common response formats (e.g., 5 points). ICC(2) and group-level correlations were also underestimated, but to a lesser extent. Finally, the magnitude of underestimation was driven in large part by an interaction between Likert-type scale usage and the amounts of within- and between-group variance. © Sage Publications.
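The censoring mechanism studied here can be reproduced in a few lines: simulate grouped continuous scores with a known ICC(1), collapse them onto a single 5-point item by rounding and clipping, and compare the ICC(1) estimates. The variance components and the group mean placed near the top of the scale are illustrative choices made so that censoring is visible; they are not the study's simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)

def icc1(scores, groups):
    """ICC(1) from a one-way random-effects ANOVA with equal group sizes:
    (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    scores, groups = np.asarray(scores, float), np.asarray(groups)
    labels = np.unique(groups)
    k = len(scores) // len(labels)
    grand = scores.mean()
    means = np.array([scores[groups == g].mean() for g in labels])
    msb = k * np.sum((means - grand) ** 2) / (len(labels) - 1)
    msw = sum(np.sum((scores[groups == g] - scores[groups == g].mean()) ** 2)
              for g in labels) / (len(scores) - len(labels))
    return (msb - msw) / (msb + (k - 1) * msw)

# Illustrative design: 100 groups of 10 raters, true ICC(1) = 0.25 / 1.25 = 0.2,
# group mean near the top of a 5-point scale so that censoring occurs.
n_groups, k = 100, 10
group_effect = rng.normal(0.0, np.sqrt(0.25), n_groups)
groups = np.repeat(np.arange(n_groups), k)
continuous = 4.3 + group_effect[groups] + rng.normal(0.0, 1.0, n_groups * k)
likert = np.clip(np.rint(continuous), 1, 5)                  # single 5-point item

print("ICC(1), continuous scores:", round(icc1(continuous, groups), 3))
print("ICC(1), 5-point Likert   :", round(icc1(likert, groups), 3))
```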

Relevance:

30.00%

Publisher:

Abstract:

The work reported in this paper is part of a project simulating maintenance operations in an automotive engine production facility. The decisions made by the people in charge of these operations form a crucial element of this simulation. Eliciting this knowledge is problematic. One approach is to use the simulation model as part of the knowledge elicitation process. This paper reports on the experience so far with using a simulation model to support knowledge management in this way. Issues are discussed regarding the data available, the use of the model, and the elicitation process itself. © 2004 Elsevier B.V. All rights reserved.