888 results for Web modelling methods
Abstract:
In this work we discuss the effects of white and coloured noise perturbations on the parameters of a mathematical model of bacteriophage infection introduced by Beretta and Kuang in [Math. Biosc. 149 (1998) 57]. We numerically simulate strong solutions of the resulting systems of stochastic ordinary differential equations (SDEs), assessing the global error, using numerical methods of both Euler-Taylor expansion and stochastic Runge-Kutta type. (C) 2003 IMACS. Published by Elsevier B.V. All rights reserved.
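As an illustration of the simplest Euler-Taylor scheme mentioned in the abstract, here is a minimal sketch of the Euler-Maruyama method for a scalar SDE dX = a(X) dt + b(X) dW. The drift and diffusion used in the example (geometric Brownian motion) are assumed for demonstration only; this is not the Beretta-Kuang phage model.

```python
import random
import math

def euler_maruyama(a, b, x0, t_end, n_steps, seed=0):
    """Strong Euler-Maruyama approximation of dX = a(X) dt + b(X) dW."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x = x + a(x) * dt + b(x) * dw
    return x

# Example: geometric Brownian motion dX = mu*X dt + sigma*X dW
mu, sigma = 0.05, 0.2
x_T = euler_maruyama(lambda x: mu * x, lambda x: sigma * x,
                     x0=1.0, t_end=1.0, n_steps=1000)
```

The scheme has strong order 1/2 in general; halving dt roughly shrinks the global strong error by a factor of sqrt(2).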
Abstract:
Background and Aims The morphogenesis and architecture of a rice plant, Oryza sativa, are critical factors in the yield equation, but they are not well studied because of the lack of appropriate tools for 3D measurement. The architecture of rice plants is characterized by a large number of tillers and leaves. The aims of this study were to specify rice plant architecture and to find appropriate functions to represent the 3D growth across all growth stages. Methods A japonica type rice, 'Namaga', was grown in pots under outdoor conditions. A 3D digitizer was used to measure the rice plant structure at intervals from the young seedling stage to maturity. The L-system formalism was applied to create '3D virtual rice' plants, incorporating models of phenological development and leaf emergence period as a function of temperature and photoperiod, which were used to determine the timing of tiller emergence. Key Results The relationships between the nodal positions and leaf lengths, leaf angles and tiller angles were analysed and used to determine growth functions for the models. The '3D virtual rice' reproduces the structural development of isolated plants and provides a good estimation of the tillering process and of the accumulation of leaves. Conclusions The results indicated that the '3D virtual rice' has the potential to demonstrate differences in structure and development between cultivars and under different environmental conditions. Future work, necessary to reflect both cultivar and environmental effects on model performance and to link with physiological models, is proposed in the discussion.
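The core of the L-system formalism used here is parallel string rewriting. The following is a minimal, generic sketch; the rule set is a toy tillering-like example invented for illustration, not the actual '3D virtual rice' production rules.

```python
def lsystem(axiom, rules, n):
    """Apply parallel rewriting rules n times to an L-system axiom."""
    s = axiom
    for _ in range(n):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Toy rule set (illustrative only): an apex A produces an internode I,
# a bracketed lateral tiller [T], and a new apex each step.
rules = {"A": "I[T]A"}
print(lsystem("A", rules, 3))  # -> I[T]I[T]I[T]A
```

In graphical interpretations, each symbol is later mapped to a drawing command (move, turn, branch push/pop), which is how such strings become 3D plant structures.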
Abstract:
This paper presents a new method for producing a functional-structural plant model that simulates response to different growth conditions, yet does not require detailed knowledge of underlying physiology. The example used to present this method is the modelling of the mountain birch tree. This new functional-structural modelling approach is based on linking an L-system representation of the dynamic structure of the plant with a canonical mathematical model of plant function. Growth indicated by the canonical model is allocated to the structural model according to probabilistic growth rules, such as rules for the placement and length of new shoots, which were derived from an analysis of architectural data. The main advantage of the approach is that it is relatively simple compared to the prevalent process-based functional-structural plant models and does not require a detailed understanding of underlying physiological processes, yet it is able to capture important aspects of plant function and adaptability, unlike simple empirical models. This approach, combining canonical modelling, architectural analysis and L-systems, thus fills the important role of providing an intermediate level of abstraction between the two extremes of deeply mechanistic process-based modelling and purely empirical modelling. We also investigated the relative importance of various aspects of this integrated modelling approach by analysing the sensitivity of the standard birch model to a number of variations in its parameters, functions and algorithms. The results show that using light as the sole factor determining the structural location of new growth gives satisfactory results. Including the influence of additional regulating factors made little difference to global characteristics of the emergent architecture. Changing the form of the probability functions and using alternative methods for choosing the sites of new growth also had little effect. (c) 2004 Elsevier B.V. All rights reserved.
Abstract:
Refinement in software engineering allows a specification to be developed in stages, with design decisions taken at earlier stages constraining the design at later stages. Refinement of complex data models is difficult due to the lack of a way of defining constraints that can be progressively maintained over increasingly detailed refinements. Category theory provides a way of stating wide-scale constraints. These constraints lead to a set of design guidelines which maintain the wide-scale constraints under increasing detail. Previous methods of refinement are essentially local, and the proposed method interferes little with these local methods. The result is particularly applicable to semantic web applications, where ontologies provide systems of more or less abstract constraints on systems, which must be implemented and therefore refined by participating systems. With the approach of this paper, the concept of committing to an ontology carries much more force. (c) 2005 Elsevier B.V. All rights reserved.
Pharmacokinetic-pharmacodynamic modelling of QT interval prolongation following citalopram overdoses
Abstract:
Aims To develop a pharmacokinetic-pharmacodynamic model describing the time-course of QT interval prolongation after citalopram overdose and to evaluate the effect of charcoal on the relative risk of developing abnormal QT and heart-rate combinations. Methods Plasma concentrations and electrocardiograph (ECG) data from 52 patients after 62 citalopram overdose events were analysed in WinBUGS using a Bayesian approach. The reported doses ranged from 20 to 1700 mg and on 17 of the events a single dose of activated charcoal was administered. The developed pharmacokinetic-pharmacodynamic model was used for predicting the probability of having abnormal combinations of QT-RR, which was assumed to be related to an increased risk for torsade de pointes (TdP). Results The absolute QT interval was related to the observed heart rate with an estimated individual heart-rate correction factor [alpha = 0.36, between-subject coefficient of variation (CV) = 29%]. The heart-rate corrected QT interval was linearly dependent on the predicted citalopram concentration (slope = 40 ms l mg(-1), between-subject CV = 70%) in a hypothetical effect-compartment (half-life of effect-delay = 1.4 h). The heart-rate corrected QT was predicted to be higher in women than in men and to increase with age. Administration of activated charcoal resulted in a pronounced reduction of the QT prolongation and was shown to reduce the risk of having abnormal combinations of QT-RR by approximately 60% for citalopram doses above 600 mg. Conclusion Citalopram caused a delayed lengthening of the QT interval. Administration of activated charcoal was shown to reduce the risk that the QT interval exceeds a previously defined threshold and therefore is expected to reduce the risk of TdP.
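The structure of the described pharmacodynamic model can be sketched as below: an individual power-law heart-rate correction plus a linear concentration effect. The exponent alpha and the slope are the population estimates quoted in the abstract; the baseline QTc value and the function name are assumed placeholders.

```python
def predicted_qt(heart_rate_bpm, conc_effect_mg_per_l,
                 qtc0_ms=400.0, alpha=0.36, slope_ms_per_mg_l=40.0):
    """QT (ms) = (QTc0 + slope * Ce) * RR**alpha, with RR in seconds.

    qtc0_ms is an assumed baseline; alpha and slope are the population
    estimates quoted in the abstract; Ce is the effect-compartment
    citalopram concentration (mg/l).
    """
    rr_s = 60.0 / heart_rate_bpm          # RR interval in seconds
    qtc = qtc0_ms + slope_ms_per_mg_l * conc_effect_mg_per_l
    return qtc * rr_s ** alpha

# At 60 beats min^-1 (RR = 1 s) the heart-rate correction vanishes:
print(predicted_qt(60.0, 0.0))  # -> 400.0
```

Note how the individual exponent generalises the fixed Bazett (alpha = 0.5) and Fridericia (alpha = 1/3) corrections.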
Abstract:
Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view population-based optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, population-based optimization in terms of a stochastic gradient descent on the Kullback-Leibler divergence between the model probability density and the objective function, represented as an unknown density of assumed form. This leads to an update rule that is related and compared with previous theoretical work, a continuous version of the population-based incremental learning algorithm, and the generalized mean shift clustering framework. Experimental results are presented that demonstrate the dynamics of the new algorithm on a set of simple test problems.
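The family of methods discussed here evolves an explicit probability model of the search space. The sketch below is a simple PBIL-style update of a Gaussian search model, offered as a stand-in for the flavour of such algorithms; the paper's actual rule is a stochastic gradient descent on the Kullback-Leibler divergence, which is not reproduced here.

```python
import random

def continuous_pbil(f, mu, sigma, pop_size=20, lr=0.1, iters=200, seed=1):
    """Evolve a Gaussian search model N(mu, sigma^2) by shifting the
    mean toward the best sample each generation (a PBIL-style update,
    used here as an illustrative stand-in for the paper's KL-gradient
    rule). Minimises f."""
    rng = random.Random(seed)
    for _ in range(iters):
        pop = [rng.gauss(mu, sigma) for _ in range(pop_size)]
        best = min(pop, key=f)        # fittest sample this generation
        mu += lr * (best - mu)        # move the model toward it
    return mu

# Minimise a simple quadratic with optimum at x = 3.
mu_hat = continuous_pbil(lambda x: (x - 3.0) ** 2, mu=0.0, sigma=1.0)
```

With a fixed sigma the model mean fluctuates around the optimum; adapting sigma as well is what distinguishes the more principled density-based updates.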
Abstract:
In cell lifespan studies the exponential nature of cell survival curves is often interpreted as showing that the rate of death is independent of the age of the cells within the population. Here we present an alternative model where cells that die are replaced and the age and lifespan of the population pool is monitored until a steady state is reached. In our model newly generated individual cells are given a determined lifespan drawn from a number of known distributions including the lognormal, which is frequently found in nature. For lognormal lifespans the analytic steady-state survival curve obtained can be well fitted by a single or double exponential, depending on the mean and standard deviation. Thus, experimental evidence for exponential lifespans of one and/or two populations cannot be taken as definitive evidence for time and age independence of cell survival. A related model for a dividing population in steady state is also developed. We propose that the common adoption of age-independent, constant rates of change in biological modelling may be responsible for significant errors, both of interpretation and of mathematical deduction. We suggest that additional mathematical and experimental methods must be used to resolve the relationship between time and behavioural changes by cells that are predominantly unsynchronized.
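The steady-state survival curve of such a replacement population can be explored by Monte Carlo. The sketch below uses a standard renewal-theory fact: at steady state, a randomly chosen cell's remaining lifetime is U * L with L drawn length-biased (probability proportional to lifespan) and U uniform on [0, 1). The lognormal parameters are arbitrary illustrative choices, not fitted values from the paper.

```python
import random

def steadystate_survival(mu, sigma, ts, n=20_000, seed=2):
    """Monte Carlo survival curve of a steady-state replacement
    population whose cells receive lognormal(mu, sigma) lifespans.
    Returns, for each t in ts, the fraction of the standing population
    still alive a time t later."""
    rng = random.Random(seed)
    spans = [rng.lognormvariate(mu, sigma) for _ in range(n)]
    # length-biased resampling: longer-lived cells are over-represented
    picked = random.Random(seed + 1).choices(spans, weights=spans, k=n)
    residuals = [rng.random() * l for l in picked]
    return [sum(r > t for r in residuals) / n for t in ts]

ts = [0.0, 1.0, 2.0, 4.0]
surv = steadystate_survival(mu=0.0, sigma=0.5, ts=ts)
```

Plotting log(surv) against t shows how close to a straight line (i.e. a single exponential) the curve can be, which is the abstract's central point.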
Abstract:
Aim To develop an appropriate dosing strategy for continuous intravenous infusions (CII) of enoxaparin by minimizing the percentage of steady-state anti-Xa concentration (C-ss) outside the therapeutic range of 0.5-1.2 IU ml(-1). Methods A nonlinear mixed effects model was developed with NONMEM (R) for 48 adult patients who received CII of enoxaparin with infusion durations that ranged from 8 to 894 h at rates between 100 and 1600 IU h(-1). Three hundred and sixty-three anti-Xa concentration measurements were available from patients who received CII. These were combined with 309 anti-Xa concentrations from 35 patients who received subcutaneous enoxaparin. The effects of age, body size, height, sex, creatinine clearance (CrCL) and patient location [intensive care unit (ICU) or general medical unit] on pharmacokinetic (PK) parameters were evaluated. Monte Carlo simulations were used to (i) evaluate covariate effects on C-ss and (ii) compare the impact of different infusion rates on predicted C-ss. The best dose was selected based on the highest probability that the C-ss achieved would lie within the therapeutic range. Results A two-compartment linear model with additive and proportional residual error for general medical unit patients, and only a proportional error for patients in ICU, provided the best description of the data. CrCL and weight were found to significantly affect clearance and the volume of distribution of the central compartment, respectively. Simulations suggested that the best doses for patients in the ICU setting were 50 IU kg(-1) per 12 h (4.2 IU kg(-1) h(-1)) if CrCL < 30 ml min(-1); 60 IU kg(-1) per 12 h (5.0 IU kg(-1) h(-1)) if CrCL was 30-50 ml min(-1); and 70 IU kg(-1) per 12 h (5.8 IU kg(-1) h(-1)) if CrCL > 50 ml min(-1).
The best doses for patients in the general medical unit were 60 IU kg(-1) per 12 h (5.0 IU kg(-1) h(-1)) if CrCL < 30 ml min(-1); 70 IU kg(-1) per 12 h (5.8 IU kg(-1) h(-1)) if CrCL was 30-50 ml min(-1); and 100 IU kg(-1) per 12 h (8.3 IU kg(-1) h(-1)) if CrCL > 50 ml min(-1). These doses were selected to give equal (and minimal) probabilities of lying above or below the therapeutic range, and the highest probability that the C-ss achieved would lie within it. Conclusion The dose of enoxaparin should be individualized to the patient's renal function and weight. There is some evidence to support slightly lower doses of CII enoxaparin in patients in the ICU setting.
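The Monte Carlo dose-selection step can be sketched generically: for any linear PK model, the steady-state concentration of a continuous infusion is C-ss = rate / CL, so simulating between-subject variability in clearance gives the probability of landing in the therapeutic range. The typical clearance and variability used below are illustrative placeholders, not the published NONMEM estimates.

```python
import random
import math

def prob_in_range(rate_iu_h, cl_typ_l_h, bsv_cv=0.3,
                  lo=500.0, hi=1200.0, n=50_000, seed=3):
    """P(C-ss within [lo, hi] IU l^-1) for a continuous infusion,
    C-ss = rate / CL, with lognormal between-subject variability on CL.
    cl_typ_l_h and bsv_cv are illustrative placeholders, not the
    published population estimates. 0.5-1.2 IU ml^-1 = 500-1200 IU l^-1."""
    rng = random.Random(seed)
    omega = math.sqrt(math.log(1 + bsv_cv ** 2))  # lognormal sd on log scale
    hits = 0
    for _ in range(n):
        cl = cl_typ_l_h * math.exp(rng.gauss(0.0, omega))
        hits += lo <= rate_iu_h / cl <= hi
    return hits / n

# e.g. a 70 kg patient at 5.0 IU kg^-1 h^-1 with an assumed typical CL
p = prob_in_range(rate_iu_h=70 * 5.0, cl_typ_l_h=0.45)
```

Repeating this for each candidate rate and taking the argmax of p mirrors the selection criterion described in the abstract.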
Abstract:
Discrete stochastic simulations are a powerful tool for understanding the dynamics of chemical kinetics when there are small-to-moderate numbers of certain molecular species. In this paper we introduce delays into the stochastic simulation algorithm, thus mimicking delays associated with transcription and translation. We then show that this process may well explain more faithfully than continuous deterministic models the observed sustained oscillations in expression levels of hes1 mRNA and Hes1 protein.
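A minimal delayed stochastic simulation can be sketched as follows: production events initiate at a constant rate but only complete after a fixed delay tau (mimicking transcription/translation), while degradation is immediate. The rate constants are arbitrary illustrative values, and this toy birth-death system is far simpler than the hes1/Hes1 feedback model in the paper.

```python
import random
import heapq

def delayed_ssa(k_prod, k_deg, tau, t_end, seed=4):
    """Gillespie-style simulation with a consuming delay: production
    initiates at rate k_prod but the molecule appears tau time units
    later; degradation (rate k_deg * n) is immediate. Returns the
    copy number at t_end."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    pending = []                          # min-heap of completion times
    while t < t_end:
        a = k_prod + k_deg * n            # total propensity
        dt = rng.expovariate(a)
        # a scheduled completion pre-empts the next ordinary reaction
        # (valid because the exponential clock is memoryless)
        if pending and pending[0] <= min(t + dt, t_end):
            t = heapq.heappop(pending)    # delayed product appears
            n += 1
            continue
        t += dt
        if t >= t_end:
            break
        if rng.random() < k_prod / a:     # initiate delayed production
            heapq.heappush(pending, t + tau)
        else:                             # immediate degradation
            n -= 1
    return n

n_final = delayed_ssa(k_prod=10.0, k_deg=1.0, tau=0.5, t_end=50.0)
```

For this linear system the stationary mean is k_prod / k_deg; the delay shifts the transient and, with feedback, is what sustains oscillations.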
Abstract:
The planning and management of water resources in the Pioneer Valley, north-eastern Australia requires a tool for assessing the impact of groundwater and stream abstractions on water supply reliabilities and environmental flows in Sandy Creek (the main surface water system studied). Consequently, a fully coupled stream-aquifer model has been constructed using the code MODHMS, calibrated to near-stream observations of watertable behaviour and multiple components of gauged stream flow. This model has been tested using other methods of estimation, including stream depletion analysis and radon isotope tracer sampling. The coarseness of spatial discretisation, which is required for practical reasons of computational efficiency, limits the model's capacity to simulate small-scale processes (e.g., near-stream groundwater pumping, bank storage effects), and alternative approaches are required to complement the model's range of applicability. Model predictions of groundwater influx to Sandy Creek are compared with baseflow estimates from three different hydrograph separation techniques, which were found to be unable to reflect the dynamics of Sandy Creek stream-aquifer interactions. The model was also used to infer changes in the water balance of the system caused by historical land use change. This led to constraints on the recharge distribution which can be implemented to improve model calibration performance. (c) 2006 Elsevier B.V. All rights reserved.
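For context on the baseflow comparison, hydrograph separation is commonly done with a recursive digital filter. The sketch below is the one-parameter Lyne-Hollick filter, a standard technique offered for illustration; it is not necessarily one of the three separation methods used in the study, and the flow series is invented.

```python
def lyne_hollick_baseflow(q, alpha=0.925):
    """Single-pass Lyne-Hollick digital filter: split a streamflow
    series q into quickflow and baseflow. alpha = 0.925 is the
    commonly recommended filter parameter."""
    qf = [0.0] * len(q)                   # quickflow component
    for i in range(1, len(q)):
        qf[i] = alpha * qf[i - 1] + (1 + alpha) / 2 * (q[i] - q[i - 1])
        qf[i] = max(0.0, min(qf[i], q[i]))  # keep baseflow within [0, q]
    return [qi - f for qi, f in zip(q, qf)]

flow = [1.0, 1.2, 5.0, 3.0, 2.0, 1.5, 1.2, 1.1]
base = lyne_hollick_baseflow(flow)
```

Such filters respond only to the shape of the hydrograph, which is one reason they can fail to track genuine stream-aquifer exchange dynamics as the abstract reports.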
Abstract:
The complex nature of venom from spider species offers a unique natural source of potential pharmacological tools and therapeutic leads. The increased interest in spider venom molecules requires reproducible and precise identification methods. The current taxonomy of the Australian Funnel-web spiders is incomplete, and therefore, accurate identification of these spiders is difficult. Here, we present a study of venom from numerous morphologically similar specimens of the Hadronyche infensa species group collected from a variety of geographic locations in southeast Queensland. Analysis of the crude venoms using online reversed-phase high performance liquid chromatography/electrospray ionisation mass spectrometry (rp-HPLC/ESI-MS) revealed that the venom profiles provide a useful means of specimen identification, from the species level to species variants. Tables defining the descriptor molecules for each group of specimens were constructed and provided a quick reference of the relationship between one specimen and another. The study revealed that the morphologically similar specimens from the southeast Queensland region are a number of different species/species variants. Furthermore, the study supports aspects of the current taxonomy with respect to the H. infensa species group. Analysis of Australian Funnel-web spider venom by rp-HPLC/ESI-MS provides a rapid and accurate method of species/species variant identification. (c) 2006 Elsevier Ltd. All rights reserved.
Abstract:
Traditional vegetation mapping methods use high cost, labour-intensive aerial photography interpretation. This approach can be subjective and is limited by factors such as the extent of remnant vegetation, and the differing scale and quality of aerial photography over time. An alternative approach is proposed which integrates a data model, a statistical model and an ecological model using sophisticated Geographic Information Systems (GIS) techniques and rule-based systems to support fine-scale vegetation community modelling. This approach is based on a more realistic representation of vegetation patterns with transitional gradients from one vegetation community to another. Arbitrary, though often unrealistic, sharp boundaries can be imposed on the model by the application of statistical methods. This GIS-integrated multivariate approach is applied to the problem of vegetation mapping in the complex vegetation communities of the Innisfail Lowlands in the Wet Tropics bioregion of Northeastern Australia. The paper presents the full cycle of this vegetation modelling approach, including sampling sites, variable selection, model selection, model implementation, internal model assessment, model prediction assessments, integration of discrete vegetation community models to generate a composite pre-clearing vegetation map, independent data set model validation and scale assessments of model predictions. An accurate pre-clearing vegetation map of the Innisfail Lowlands was generated (r(2) = 0.83) through GIS integration of 28 separate statistical models. This modelling approach has good potential for wider application, including provision of vital information for conservation planning and management; a scientific basis for rehabilitation of disturbed and cleared areas; and a viable method for the production of adequate vegetation maps for conservation and forestry planning of poorly-studied areas. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
We are developing a telemedicine application which offers automated diagnosis of facial (Bell's) palsy through a Web service. We used a test data set of 43 images of facial palsy patients and 44 normal people to develop the automatic recognition algorithm. Three different image pre-processing methods were used. Machine learning techniques (support vector machine, SVM) were used to examine the difference between the two halves of the face. If there was a sufficient difference, then the SVM recognized facial palsy. Otherwise, if the halves were roughly symmetrical, the SVM classified the image as normal. It was found that the facial palsy images had a greater Hamming Distance than the normal images, indicating greater asymmetry. The median distance in the normal group was 331 (interquartile range 277-435) and the median distance in the facial palsy group was 509 (interquartile range 334-703). This difference was significant (P
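The asymmetry measure described above can be sketched as a Hamming distance between the left half of a binarised face image and the mirrored right half. The tiny binary "images" below are invented for illustration; the actual pre-processing pipeline and SVM classification are not reproduced here.

```python
def asymmetry_hamming(image):
    """Hamming distance between the left half of a binary image and
    the mirrored right half: a rough asymmetry score of the kind the
    abstract describes. Larger values indicate greater asymmetry."""
    dist = 0
    for row in image:
        half = len(row) // 2
        left, right = row[:half], row[-half:][::-1]  # mirror right half
        dist += sum(a != b for a, b in zip(left, right))
    return dist

symmetric = [[1, 0, 0, 1],
             [0, 1, 1, 0]]
droopy    = [[1, 0, 1, 1],
             [0, 1, 1, 1]]
print(asymmetry_hamming(symmetric), asymmetry_hamming(droopy))  # -> 0 2
```

A score like this, computed per image, is the sort of feature that can then be thresholded or fed to an SVM to separate palsy from normal faces.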
Abstract:
An extended refraction-diffraction equation [Massel, S.R., 1993. Extended refraction-diffraction equation for surface waves. Coastal Eng. 19, 97-126] has been applied to predict wave transformation and breaking as well as wave-induced set-up on two-dimensional reef profiles of various shapes. A free empirical coefficient α in the formula for the average rate of energy dissipation in the modified periodic bore model, ε_b = (α ρ g ω / 8π) (√(gh)/C) (H³/h), was found to be a function of the dimensionless parameter F_c0 = g^1.25 H_0^0.5 T^2.5 / h_r^1.75, proposed by Gourlay [Gourlay, M.R., 1994. Wave transformation on a coral reef. Coastal Eng. 23, 17-42]. The applicability of the developed model has been demonstrated for reefs of various shapes subjected to various incident wave conditions. Using the proposed relationship between the coefficient α and F_c0, the model provides results on wave height attenuation and set-up elevation which compare well with experimental data. (C) 2000 Elsevier Science B.V. All rights reserved.