864 results for fate and effect modelling
Abstract:
Objectives: The aim of the study was to characterise the population pharmacokinetic (popPK) properties of itraconazole (ITRA) and its active metabolite hydroxy-ITRA in a representative paediatric population of cystic fibrosis (CF) and bone marrow transplant (BMT) patients. The goals were to determine the relative bioavailability between the two oral formulations, and to explore improved dosage regimens in these patients. Methods: All paediatric patients with CF taking oral ITRA for the treatment of allergic bronchopulmonary aspergillosis, and patients undergoing BMT who were taking ITRA for prophylaxis of any fungal infection, were eligible for the study. A minimum of two blood samples was drawn after the capsules and also after switching to the oral solution, or vice versa. ITRA and hydroxy-ITRA plasma concentrations were measured by HPLC[1]. A nonlinear mixed-effects modelling approach (NONMEM 5.1.1) was used to describe the PK of ITRA and hydroxy-ITRA simultaneously. Simulations were used to assess dosing strategies in these patients. Results: Forty-nine patients (29 CF, 20 BMT) were recruited to the study, providing 227 blood samples for the population analysis. A one-compartment model with first-order absorption and elimination best described ITRA kinetics, with first-order conversion to hydroxy-ITRA. For ITRA, the apparent clearance (CL_ITRA/F) and volume of distribution (V_ITRA/F) were 35.5 L/h and 672 L, respectively; the absorption rate constant was 0.0901 h⁻¹ for the capsule formulation and 0.959 h⁻¹ for the oral solution. The relative bioavailability of the capsules (vs. the solution) was 0.55. For hydroxy-ITRA, the apparent volume of distribution and clearance were 10.6 L and 5.28 L/h, respectively. Of several screened covariates, only allometrically scaled total body weight significantly improved the fit to the data. No difference between the two populations was found.
Conclusion: The developed popPK model adequately described the pharmacokinetics of ITRA and hydroxy-ITRA in paediatric patients with CF and patients undergoing BMT. The high inter-patient variability confirmed previous data in CF[2], leukaemia and BMT[3] patients. From the population model, simulations showed that the standard dose (5 mg/kg/day) needs to be doubled for the solution formulation, and quadrupled for the capsules, to achieve an adequate target therapeutic trough plasma concentration of 0.5 mg/L[4] in these patients.
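The structural model reported above (one compartment, first-order absorption and elimination, first-order conversion to hydroxy-ITRA) can be sketched as a simple simulation. The following is a minimal illustration using the population estimates quoted in the abstract; the dose, the time grid and the assumption that all ITRA clearance is converted to the metabolite are illustrative, not part of the study.

```python
# Population estimates quoted in the abstract.
CL_F = 35.5          # apparent ITRA clearance, L/h
V_F = 672.0          # apparent ITRA volume of distribution, L
KA_SOLUTION = 0.959  # absorption rate constant, oral solution, 1/h
KA_CAPSULE = 0.0901  # absorption rate constant, capsules, 1/h
F_CAPSULE = 0.55     # capsule bioavailability relative to solution
CL_M = 5.28          # apparent hydroxy-ITRA clearance, L/h
V_M = 10.6           # apparent hydroxy-ITRA volume of distribution, L
FM = 1.0             # assumed fraction converted to metabolite (illustrative)

def simulate(dose_mg, ka, frel=1.0, hours=24.0, dt=0.01):
    """Euler integration of gut, parent and metabolite amounts (mg)."""
    a_gut, a_par, a_met = dose_mg * frel, 0.0, 0.0
    conc = []
    for _ in range(int(hours / dt)):
        absorbed = ka * a_gut * dt
        eliminated = (CL_F / V_F) * a_par * dt
        met_cleared = (CL_M / V_M) * a_met * dt
        a_gut -= absorbed
        a_par += absorbed - eliminated
        a_met += FM * eliminated - met_cleared
        conc.append(a_par / V_F)   # ITRA plasma concentration, mg/L
    return conc

conc_solution = simulate(200.0, KA_SOLUTION)                   # illustrative dose
conc_capsule = simulate(200.0, KA_CAPSULE, frel=F_CAPSULE)
print(f"solution Cmax ~{max(conc_solution):.3f} mg/L, "
      f"capsule Cmax ~{max(conc_capsule):.3f} mg/L")
```

With these values the solution reaches a markedly higher peak than an equal capsule dose, reflecting both the 0.55 relative bioavailability and the roughly tenfold slower capsule absorption.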
Modelling carbon dynamics within tropical rainforest environments: Using the 3-PG and process models
Abstract:
This paper argues for the use of reusable simulation templates as a tool to help predict the effect of e-business introduction on business processes. First, a set of requirements for e-business modelling is introduced and modelling options are described. Traditional business process mapping techniques are examined as a way of identifying potential changes. Whilst paper-based process mapping may not highlight significant differences between traditional and e-business processes, simulation does allow the real effects of e-business to be identified. Simulation has the advantage of capturing the dynamic characteristics of the process, thus reflecting more accurately the changes in behaviour. This paper shows the value of using generic process maps as a starting point for collecting the data needed to build the simulation, and proposes the use of reusable templates/components for the speedier building of e-business simulation models.
Abstract:
Experimental investigations and computer modelling studies have been made on the refrigerant-water counterflow condenser section of a small air-to-water heat pump. The main object of the investigation was a comparative study between the computer modelling predictions and the experimental observations for a range of operating conditions, but other characteristics of a counterflow heat exchanger are also discussed. The counterflow condenser consisted of 15 metres of a thermally coupled pair of copper pipes, one containing the R12 working fluid and the other water flowing in the opposite direction. This condenser was mounted horizontally and folded into 0.5 metre straight sections. Thermocouples were inserted in both pipes at one metre intervals, and transducers for pressure and flow measurement were also included. Data acquisition, storage and analysis were carried out by a micro-computer suitably interfaced with the transducers and thermocouples. Many sets of readings were taken under a variety of conditions, with air temperature ranging from 18 to 26 degrees Celsius, water inlet temperature from 13.5 to 21.7 degrees, R12 inlet temperature from 61.2 to 81.7 degrees, and water mass flow rate from 6.7 to 32.9 grammes per second. A Fortran computer model of the condenser (originally prepared by Carrington[1]) has been modified to match the information available from the experimental work. This program uses iterative segmental integration over the desuperheating, mixed phase and subcooled regions for the R12 working fluid, the water always being in the liquid phase. Methods of estimating the inlet and exit fluid conditions from the available experimental data have been developed for application to the model.
Temperature profiles and other parameters have been predicted and compared with experimental values for the condenser for a range of evaporator conditions and have shown that the model gives a satisfactory prediction of the physical behaviour of a simple counterflow heat exchanger in both single phase and two phase regions.
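The iterative segmental integration described above can be illustrated with a much-reduced single-phase sketch: the pipe is divided into segments, an energy balance is marched along the R12 side, and the guessed water outlet temperature is iterated until the known water inlet temperature is recovered at the far end. All flow rates, properties and the conductance below are round illustrative numbers, not data from the study, and the desuperheating and mixed-phase regions are omitted.

```python
SEGMENTS = 150
LENGTH = 15.0                    # condenser length, m (from the abstract)
UA_PER_M = 8.0                   # assumed overall conductance, W/(m*K)
M_R12, CP_R12 = 0.010, 980.0     # R12 mass flow kg/s and cp J/(kg*K), illustrative
M_W, CP_W = 0.020, 4180.0        # water mass flow and cp, illustrative

def march(t_r12_in, t_w_out_guess):
    """March segment by segment from the R12-inlet end (where the water
    exits, since the flows are counter-current); return the water
    temperature reached at the far end, i.e. the implied water inlet."""
    dx = LENGTH / SEGMENTS
    t_h, t_c = t_r12_in, t_w_out_guess
    for _ in range(SEGMENTS):
        q = UA_PER_M * dx * (t_h - t_c)   # heat transferred in segment, W
        t_h -= q / (M_R12 * CP_R12)       # R12 cools downstream
        t_c -= q / (M_W * CP_W)           # water is cooler upstream of its outlet
    return t_c

def solve(t_r12_in=70.0, t_w_in=15.0):
    """Bisection on the water-outlet guess to satisfy the inlet boundary."""
    lo, hi = t_w_in, t_r12_in
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if march(t_r12_in, mid) > t_w_in:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

t_w_out = solve()
print(f"predicted water outlet temperature ~{t_w_out:.1f} C")
```

The same marching structure extends region by region once local heat transfer correlations and refrigerant property routines replace the constant coefficients used here.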
Abstract:
Experiments and theoretical modelling have been carried out to predict the performance of a solar-powered liquid desiccant cooling system for greenhouses. We have tested two components of the system in the laboratory using MgCl2 desiccant: (i) a regenerator which was tested under a solar simulator and (ii) a desiccator which was installed in a test duct. Theoretical models have been developed for both regenerator and desiccator and gave good agreement with the experiments. The verified computer model is used to predict the performance of the whole system during the hot summer months in Mumbai, Chittagong, Muscat, Messina and Havana. Taking examples of temperate, sub-tropical, tropical and heat-tolerant tropical crops (lettuce, soya bean, tomato and cucumber respectively) we estimate the extensions in growing seasons enabled by the system. Compared to conventional evaporative cooling, the desiccant system lowers average daily maximum temperatures in the hot season by 5.5-7.5 °C, sufficient to maintain viable growing conditions for lettuce throughout the year. In the case of tomato, cucumber and soya bean the system enables optimal cultivation through most summer months. It is concluded that the concept is technically viable and deserves testing by means of a pilot installation at an appropriate location.
Abstract:
This study proposes an integrated analytical framework for effective management of project risks, combining a multiple criteria decision-making technique with decision tree analysis. First, a conceptual risk management model was developed through a thorough literature review. The model was then applied through action research on a petroleum oil refinery construction project in central India in order to demonstrate its effectiveness. Oil refinery construction projects are risky because of technical complexity, resource unavailability, the involvement of many stakeholders and strict environmental requirements. Although project risk management has been researched extensively, a practical and easily adoptable framework is missing. In the proposed framework, risks are identified using a cause and effect diagram, analysed using the analytic hierarchy process, and responses are developed using a risk map. Additionally, decision tree analysis allows various options for risk response development to be modelled and optimises the selection of a risk mitigation strategy. The proposed risk management framework could be easily adopted and applied in any project and integrated with other project management knowledge areas.
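The analytic hierarchy process step in such a framework can be sketched as follows: a pairwise comparison matrix over risk criteria is reduced to a priority vector (the principal eigenvector), and a consistency ratio guards against incoherent judgements. The three criteria and the matrix entries below are invented for illustration; only the technique follows the abstract.

```python
def ahp_priorities(matrix, iters=100):
    """Principal eigenvector of a pairwise comparison matrix (power method),
    normalised to sum to 1, plus the approximate principal eigenvalue."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w_new = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w_new)
        w = [x / total for x in w_new]
    aw = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / w[i] for i in range(n)) / n   # approximates lambda_max
    return w, lam

# Hypothetical criteria: technical complexity vs. resource availability
# vs. environmental requirements (entries are made-up judgements).
M = [[1.0,   3.0, 5.0],
     [1/3.0, 1.0, 2.0],
     [1/5.0, 1/2.0, 1.0]]

weights, lam = ahp_priorities(M)
ci = (lam - 3) / (3 - 1)   # consistency index
cr = ci / 0.58             # random index for n = 3 is 0.58
print([round(w, 3) for w in weights], round(cr, 3))
```

A consistency ratio below 0.1 is the usual threshold for accepting the judgements; the made-up matrix above is nearly consistent.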
Abstract:
The human accommodation system has been extensively examined for over a century, with a particular focus on trying to understand the mechanisms that lead to the loss of accommodative ability with age (presbyopia). The accommodative process, along with the potential causes of presbyopia, is disputed, hindering efforts to develop methods of restoring accommodation in the presbyopic eye. One method that can be used to provide insight into this complex area is Finite Element Analysis (FEA). The effectiveness of FEA in modelling the accommodative process has been illustrated by a number of accommodative FEA models developed to date. However, these previous models have had limitations, principally due to the variation in data on the geometry of the accommodative components, combined with sparse measurements of their material properties. Despite advances in available data, continued oversimplification has occurred in the modelling of the crystalline lens structure and the zonular fibres that surround the lens. A new accommodation model was proposed by the author that aims to eliminate these limitations. A novel representation of the zonular structure was developed, combined with updated lens and capsule modelling methods. The model has been designed to be adaptable so that accommodation systems across a range of ages can be modelled, allowing the age-related changes that occur to be simulated. The new modelling methods were validated by comparing the changes induced within the model to available in vivo data, leading to the definition of three different age models. These were used in an extended sensitivity study on age-related changes, in which individual parameters were altered to investigate their effect on the accommodative process. The material properties were found to have the largest impact on the decline in accommodative ability, particularly compared to changes in ciliary body movement or zonular structure.
Novel data on the importance of capsule stiffness and thickness were also established. The new model detailed within this thesis provides further insight into the accommodation mechanism, as well as a foundation for future, more detailed investigations into accommodation, presbyopia and accommodative restoration techniques.
Abstract:
This research investigates specific ash control methods to limit inorganic content within biomass prior to fast pyrolysis, and the effect of specific ash components on fast pyrolysis processing, mass balance yields, and bio-oil quality and stability. Inorganic content in miscanthus was naturally reduced over the winter period from June (7.36 wt.%) to February (2.80 wt.%) due to a combination of senescence and natural leaching by rain water. A September harvest produced similar mass balance yields, bio-oil quality and stability compared to the February harvest (conventional harvest), but the nitrogen content in the above-ground crop was too high (208 kg ha⁻¹) to maintain sustainable crop production. Deionised water, 1.00% HCl and 0.10% Triton X-100 washes were used to reduce the inorganic content of miscanthus. Miscanthus washed with 0.10% Triton X-100 resulted in the highest total liquid yield (76.21 wt.%) and the lowest char and reaction water yields (9.77 wt.% and 8.25 wt.% respectively). Concentrations of Triton X-100 were varied to study further effects on mass balance yields and bio-oil stability. All concentrations of Triton X-100 increased total liquid yield and decreased char and reaction water yields compared to untreated miscanthus. In terms of bio-oil stability, 1.00% Triton X-100 produced the most stable bio-oil, with the lowest viscosity index (2.43) and the lowest water content index (1.01). Beech wood was impregnated with potassium and phosphorus, resulting in lower liquid yields and increased char and gas yields due to their catalytic effect on fast pyrolysis product distribution. Increased potassium and phosphorus concentrations produced less stable bio-oils, with viscosity and water content indexes increasing. Fast pyrolysis processing of phosphorus-impregnated beech wood was problematic as the reactor bed material agglomerated into large clumps due to char formation within the reactor, affecting fluidisation and heat transfer.
Abstract:
Self-awareness and self-expression are promising architectural concepts for equipping embedded systems to match dedicated application scenarios and constraints in the avionics and space-flight industries. Typically, these systems operate in largely undefined environments and are not reachable after deployment for a long time, or even ever again. This paper introduces a reference architecture as well as a novel modelling and simulation environment for self-aware and self-expressive systems, with transaction-level modelling, simulation and detailed modelling capabilities for hardware aspects, precise process chronology execution, and fine timing resolutions. Furthermore, industrially relevant system sizes with several self-aware and self-expressive nodes can be handled by the modelling and simulation environment.
Abstract:
Since the Exxon Valdez accident in 1989, renewed interest has emerged in better understanding and predicting the fate and transport of crude oil lost to marine environments. The short-term fate of an Arabian crude oil was simulated in laboratory experiments using artificial seawater. The time-dependent changes in the rheological and chemical properties of the oil under the influence of natural weathering processes were characterized, including the dispersion behavior of the oil under simulated ocean turbulence. Methodology included monitoring the changes in the chemical composition of the oil by gas chromatography/mass spectrometry (GC/MS), toxicity evaluations of the oil dispersions by Microtox analysis, and quantification of dispersed soluble aromatics by fluorescence spectrometry. Results for this oil show a sharp initial increase in viscosity, due to evaporative losses of lower molecular weight hydrocarbons, with the formation of stable water-in-oil emulsions occurring within one week. Toxicity evaluations indicate a decreased EC-50 value (higher toxicity) after the oil has weathered eight hours, with maximum toxicity observed after weathering seven days. Particle charge distributions, determined by electrophoretic techniques using a Coulter DELSA 440, reveal that an unstable oil dispersion exists within the size range of 1.5 to 2.5 µm, with recombination processes observed between sequential laser runs of a single sample.
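The sharp viscosity rise with evaporative loss reported above is often summarised with a simple empirical exponential relation between viscosity and the fraction of oil evaporated. The sketch below uses that common form; both coefficients are invented for illustration, not values measured in this study.

```python
import math

MU_0 = 50.0    # fresh-oil viscosity, mPa*s (illustrative)
C_VISC = 8.0   # empirical viscosity-evaporation coefficient (assumed)

def viscosity(fraction_evaporated):
    """Empirical mu = mu0 * exp(C * Fv) weathering model: viscosity grows
    exponentially as the light ends evaporate."""
    return MU_0 * math.exp(C_VISC * fraction_evaporated)

for fv in (0.0, 0.1, 0.2, 0.3):
    print(f"Fv = {fv:.1f}: viscosity ~{viscosity(fv):.0f} mPa*s")
```

Even modest evaporative losses multiply the viscosity severalfold under this form, consistent with the sharp initial increase the experiments observed.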
Abstract:
Atomisation of an aqueous solution for tablet film coating is a complex process with multiple factors determining droplet formation and properties. The importance of droplet size for an efficient process and a high quality final product has been noted in the literature, with smaller droplets reported to produce smoother, more homogenous coatings whilst simultaneously avoiding the risk of damage through over-wetting of the tablet core. In this work the effect of droplet size on tablet film coat characteristics was investigated using X-ray microcomputed tomography (XμCT) and confocal laser scanning microscopy (CLSM). A quality by design approach utilising design of experiments (DOE) was used to optimise the conditions necessary for production of droplets at a small (20 μm) and large (70 μm) droplet size. Droplet size distribution was measured using real-time laser diffraction and the volume median diameter taken as a response. DOE yielded information on the relationship that three critical process parameters (pump rate, atomisation pressure and coating-polymer concentration) had with droplet size. The model generated was robust, scoring highly for model fit (R² = 0.977), predictability (Q² = 0.837), validity and reproducibility. Modelling confirmed that all parameters had either a linear or quadratic effect on droplet size and revealed an interaction between pump rate and atomisation pressure. Fluidised bed coating of tablet cores was performed with either small or large droplets followed by CLSM and XμCT imaging. Addition of commonly used contrast materials to the coating solution improved visualisation of the coating by XμCT, showing the coat as a discrete section of the overall tablet. Imaging provided qualitative and quantitative evidence revealing that smaller droplets formed thinner, more uniform and less porous film coats.
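The kind of quadratic response-surface model the DOE produced can be illustrated with an ordinary least-squares fit. Here droplet size is modelled as a quadratic in a single factor (atomisation pressure) on synthetic calibration points; the data, the fitted coefficients and the restriction to one factor are all invented for the example — only the model form (linear plus quadratic terms, judged by an R²-style fit statistic) follows the abstract.

```python
def solve3(a, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    m = [row[:] + [rhs] for row, rhs in zip(a, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

# Synthetic calibration points: (pressure in bar, droplet size in um).
data = [(1.0, 68.0), (1.5, 52.0), (2.0, 40.0), (2.5, 31.0), (3.0, 26.0)]

# Normal equations for the quadratic model size = b0 + b1*p + b2*p^2.
xs = [[1.0, p, p * p] for p, _ in data]
ys = [y for _, y in data]
ata = [[sum(r[i] * r[j] for r in xs) for j in range(3)] for i in range(3)]
atb = [sum(r[i] * y for r, y in zip(xs, ys)) for i in range(3)]
b0, b1, b2 = solve3(ata, atb)

pred = [b0 + b1 * p + b2 * p * p for p, _ in data]
ss_res = sum((y - f) ** 2 for y, f in zip(ys, pred))
ss_tot = sum((y - sum(ys) / len(ys)) ** 2 for y in ys)
r2 = 1 - ss_res / ss_tot
print(f"fit: size = {b0:.1f} {b1:+.1f}*p {b2:+.2f}*p^2, R2 = {r2:.3f}")
```

The positive quadratic coefficient captures the curvature (diminishing size reduction at higher pressure); the full DOE model additionally carries the second factor terms and the pump-rate/pressure interaction noted in the abstract.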
Abstract:
Previous studies of the strength of the lithosphere in central Iberia fail to resolve the depth of earthquakes because of rheological uncertainties. Therefore, new contributions are considered (the crustal structure from a density model) and several parameters (tectonic regime, mantle rheology, strain rate) are checked in this paper to properly examine the role of lithospheric strength in the intraplate seismicity and the Cenozoic evolution. The strength distribution with depth, the integrated strength, the effective elastic thickness and the seismogenic thickness have been calculated by finite element modelling of the lithosphere across the Central System mountain range and the bordering Duero and Madrid sedimentary basins. Only a dry mantle under strike-slip/extension and a strain rate of 10⁻¹⁵ s⁻¹, or under extension and 10⁻¹⁶ s⁻¹, produces a strong lithosphere. The integrated strength and the elastic thickness are lower in the mountain chain than in the basins. These anisotropies have been maintained since the Cenozoic and determine the mountain uplift and the biharmonic folding of the Iberian lithosphere during the Alpine deformations. The seismogenic thickness bounds the seismic activity to the upper–middle crust, and the decreasing crustal strength from the Duero Basin towards the Madrid Basin is related to a parallel increase in Plio–Quaternary deformations and seismicity. However, elasto–plastic modelling shows that the current African–Eurasian convergence is accommodated elastically or ductilely, which accounts for the low seismicity recorded in this region.
Abstract:
Once the preserve of university academics and research laboratories with high-powered and expensive computers, the power of sophisticated mathematical fire models has now arrived on the desktop of the fire safety engineer. It is a revolution made possible by parallel advances in PC technology and fire modelling software. But while the tools have proliferated, there has not been a corresponding transfer of knowledge and understanding of the discipline from expert to general user. It is a serious shortfall of which the lack of suitable engineering courses dealing with the subject is symptomatic, if not the cause. The computational vehicles to run the models and an understanding of fire dynamics are not enough to exploit these sophisticated tools. Too often, they become 'black boxes' producing magic answers in exciting three-dimensional colour graphics and client-satisfying 'virtual reality' imagery. As well as a fundamental understanding of the physics and chemistry of fire, the fire safety engineer must have at least a rudimentary understanding of the theoretical basis supporting fire models in order to appreciate their limitations and capabilities. The five-day short course, "Principles and Practice of Fire Modelling", run by the University of Greenwich, attempts to bridge the divide between the expert and the general user, providing participants with the expertise they need to understand the results of mathematical fire modelling. The course and the associated textbook, "Mathematical Modelling of Fire Phenomena", are aimed at students and professionals with wide and varied backgrounds, offering a friendly guide through the unfamiliar terrain of mathematical modelling. These concepts and techniques are introduced and demonstrated in seminars. Those attending also gain experience in using the methods during 'hands-on' tutorial and workshop sessions.
On completion of this short course, those participating should:
- be familiar with the concept of zone and field modelling;
- be familiar with zone and field model assumptions;
- have an understanding of the capabilities and limitations of modelling software packages for zone and field modelling;
- be able to select and use the most appropriate mathematical software and demonstrate its use in compartment fire applications; and
- be able to interpret model predictions.
The result is that the fire safety engineer is empowered to realise the full value of mathematical models to help in the prediction of fire development, and to determine the consequences of fire under a variety of conditions. This in turn enables him or her to design and implement safety measures which can potentially control, or at the very least reduce, the impact of fire.