72 results for density model


Relevance: 30.00%

Abstract:

Multiscale modeling is emerging as one of the key challenges in mathematical biology. However, the recent rapid increase in the number of modeling methodologies being used to describe cell populations has raised a number of interesting questions. For example, at the cellular scale, how can the appropriate discrete cell-level model be identified in a given context? Additionally, how can the many phenomenological assumptions used in the derivation of models at the continuum scale be related to individual cell behavior? In order to begin to address such questions, we consider a discrete one-dimensional cell-based model in which cells are assumed to interact via linear springs. From the discrete equations of motion, the continuous Rouse [P. E. Rouse, J. Chem. Phys. 21, 1272 (1953)] model is obtained. This formalism readily allows the definition of a cell number density for which a nonlinear "fast" diffusion equation is derived. Excellent agreement is demonstrated between the continuum and discrete models. Subsequently, via the incorporation of cell division, we demonstrate that the derived nonlinear diffusion model is robust to the inclusion of more realistic biological detail. In the limit of stiff springs, where cells can be considered to be incompressible, we show that cell velocity can be directly related to cell production. This assumption is frequently made in the literature but our derivation places limits on its validity. Finally, the model is compared with a model of a similar form recently derived for a different discrete cell-based model and it is shown how the different diffusion coefficients can be understood in terms of the underlying assumptions about cell behavior in the respective discrete models.
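The discrete starting point described above can be sketched in a few lines. The overdamped equation of motion below, dx_i/dt = k(x_{i+1} - 2x_i + x_{i-1}), and all parameter values are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def relax_spring_chain(x0, k=1.0, dt=0.01, steps=5000):
    """Overdamped chain of cells joined by linear springs:
    dx_i/dt = k * (x_{i+1} - 2*x_i + x_{i-1}) for interior cells.
    The two boundary cells are held fixed (an illustrative choice)."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        x[1:-1] += dt * k * (x[2:] - 2.0 * x[1:-1] + x[:-2])
    return x

# Eleven cells, initially crowded into the left of the fixed domain [0, 10].
x0 = np.concatenate([np.linspace(0.0, 3.0, 10), [10.0]])
x = relax_spring_chain(x0)
# The chain relaxes toward uniform spacing (constant cell number density),
# mirroring the diffusive spreading of the continuum limit.
```

In the continuum limit the same dynamics yield a diffusion equation for the cell number density; stiffer springs (larger k) correspond to faster relaxation, consistent with the incompressible limit discussed in the abstract.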

Relevance: 30.00%

Abstract:

A mathematical model describing the uptake of low density lipoprotein (LDL) and very low density lipoprotein (VLDL) particles by a single hepatocyte cell is formulated and solved. The model includes a description of the dynamic change in receptor density on the surface of the cell due to the binding and dissociation of the lipoprotein particles, the subsequent internalisation of bound particles, receptors and unbound receptors, the recycling of receptors to the cell surface, cholesterol-dependent de novo receptor formation by the cell, and the effect that particle uptake has on the cell's overall cholesterol content. We explore the effect on LDL uptake of VLDL blocking access to LDL receptors, and of the internalisation of VLDL particles containing different amounts of apolipoprotein E (we will refer to these particles as VLDL-2 and VLDL-3). By comparison with experimental data we find that measures of cell cholesterol content are important in differentiating between the mechanisms by which VLDL is thought to inhibit LDL uptake. We extend our work to show that in the presence of both types of VLDL particle (VLDL-2 and VLDL-3), measuring relative LDL uptake does not allow the effects of blocking and of internalisation of each VLDL particle to be distinguished. Instead, by considering the intracellular cholesterol content, it is found that internalisation of VLDL-2 and VLDL-3 leads to the highest intracellular cholesterol concentration. A sensitivity analysis of the model reveals that the binding, unbinding and internalisation rates, the fraction of receptors recycled and the rate at which the cholesterol-dependent free receptors are created by the cell have important implications for the overall uptake dynamics of either VLDL or LDL particles and the subsequent intracellular cholesterol concentration. (C) 2008 Elsevier Ltd. All rights reserved.
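A minimal sketch of the kind of receptor-trafficking balance the abstract describes, reduced to one particle type; all rate constants, the fixed extracellular particle level and the initial state are hypothetical, and the paper's cholesterol feedback on receptor production is omitted:

```python
import numpy as np

def simulate_uptake(T=200.0, dt=0.01, lipo=1.0, k_on=0.5, k_off=0.1,
                    k_int=0.2, f_rec=0.7, k_prod=0.05):
    """Toy receptor-trafficking model for a single cell, one particle type.
    Free surface receptors R bind extracellular particles (fixed level
    `lipo`), bound complexes B are internalised, a fraction f_rec of the
    internalised receptors is recycled to the surface (the rest degraded),
    and new receptors are produced at a constant rate k_prod.
    All rates and initial conditions are hypothetical."""
    R, B, uptake = 1.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        bind = k_on * lipo * R - k_off * B   # net binding flux
        internalise = k_int * B              # internalisation flux
        R += dt * (-bind + f_rec * internalise + k_prod)
        B += dt * (bind - internalise)
        uptake += dt * internalise           # cumulative particles taken up
    return R, B, uptake

R, B, uptake = simulate_uptake()
```

At steady state, receptor production must balance the degraded fraction of internalised receptors, giving B* = k_prod / (k_int (1 - f_rec)); checking such balances is a quick sanity test on any implementation of the full model.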

Relevance: 30.00%

Abstract:

We investigate the performance of phylogenetic mixture models in reducing a well-known and pervasive artifact of phylogenetic inference known as the node-density effect, comparing them to partitioned analyses of the same data. The node-density effect refers to the tendency for the amount of evolutionary change in longer branches of phylogenies to be underestimated compared to that in regions of the tree where there are more nodes and thus branches are typically shorter. Mixture models allow more than one model of sequence evolution to describe the sites in an alignment without prior knowledge of the evolutionary processes that characterize the data or how they correspond to different sites. If multiple evolutionary patterns are common in sequence evolution, mixture models may be capable of reducing node-density effects by characterizing the evolutionary processes more accurately. In gene-sequence alignments simulated to have heterogeneous patterns of evolution, we find that mixture models can reduce node-density effects to negligible levels or remove them altogether, performing as well as partitioned analyses based on the known simulated patterns. The mixture models achieve this without knowledge of the patterns that generated the data and even in some cases without specifying the full or true model of sequence evolution known to underlie the data. The latter result is especially important in real applications, as the true model of evolution is seldom known. We find the same patterns of results for two real data sets with evidence of complex patterns of sequence evolution: mixture models substantially reduced node-density effects and returned better likelihoods compared to partitioning models specifically fitted to these data. 
We suggest that the presence of more than one pattern of evolution in the data is a common source of error in phylogenetic inference and that mixture models can often detect these patterns even without prior knowledge of their presence in the data. Routine use of mixture models alongside other approaches to phylogenetic inference may often reveal hidden or unexpected patterns of sequence evolution and can improve phylogenetic inference.
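The device described above, in which several substitution processes jointly explain each alignment site, has a standard likelihood form, sketched here in generic notation (the symbols are the usual ones, not taken from this study):

```latex
L(D_s \mid T) \;=\; \sum_{k=1}^{K} w_k \, L_k(D_s \mid T, \theta_k),
\qquad w_k \ge 0, \quad \sum_{k=1}^{K} w_k = 1
```

Here \(D_s\) is the data at site \(s\), \(T\) the tree, \(\theta_k\) the parameters of the \(k\)-th component model and \(w_k\) its mixing weight; unlike a partitioned analysis, no site is assigned to a component in advance.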

Relevance: 30.00%

Abstract:

A size-structured plant population model is developed to study the evolution of pathogen-induced leaf shedding under various environmental conditions. The evolutionarily stable strategy (ESS) for the leaf shedding rate is determined for two scenarios: (i) a constant leaf shedding strategy and (ii) an infection-load-driven leaf shedding strategy. The model predicts that ESS leaf shedding rates increase with nutrient availability. No effect of plant density on the ESS leaf shedding rate is found, even though disease severity increases with plant density. When auto-infection, that is, increased infection due to spores produced on the plant itself, plays a key role in further disease increase on the plant, shedding leaves removes disease that would otherwise contribute to that increase; consequently, leaf shedding responses to infection may evolve. When external infection, that is, infection due to immigrant spores, is the key determinant, shedding a leaf does not reduce the force of infection on the shedding plant, and in this case leaf shedding will not evolve. Under low external disease pressure, adopting an infection-driven leaf shedding strategy is more efficient than adopting a constant leaf shedding strategy: a plant adopting an infection-driven strategy does not shed any leaves in the absence of infection, even when leaf shedding rates are high, whereas a plant adopting a constant leaf shedding rate sheds the same amount of leaves regardless of the presence of infection. Based on these results we develop two hypotheses that can be tested if the appropriate plant material is available.

Relevance: 30.00%

Abstract:

1. We studied a reintroduced population of the formerly critically endangered Mauritius kestrel Falco punctatus Temminck from its inception in 1987 until 2002, by which time the population had attained carrying capacity for the study area. Post-1994 the population received minimal management other than the provision of nestboxes. 2. We analysed data collected on survival (1987-2002) using program MARK to explore the influence of density-dependent and density-independent processes on survival over the course of the population's development. 3. We found evidence for non-linear, threshold density dependence in juvenile survival rates. Juvenile survival was also strongly influenced by climate, with the temporal distribution of rainfall during the cyclone season being the most influential climatic variable. Adult survival remained constant throughout. 4. Our most parsimonious capture-mark-recapture statistical model, which was constrained by density and climate, explained 75.4% of the temporal variation exhibited in juvenile survival rates over the course of the population's development. 5. This study is an example of how data collected as part of a threatened species recovery programme can be used to explore the role and functional form of natural population regulatory processes. With the improvements in conservation management techniques and the resulting success stories, formerly threatened species offer unique opportunities to further our understanding of the fundamental principles of population ecology.

Relevance: 30.00%

Abstract:

Inelastic neutron scattering (INS) spectroscopy has been used to observe and characterise hydrogen on the carbon component of a Pt/C catalyst. INS provides the complete vibrational spectrum of coronene, regarded as a molecular model of a graphite layer. The vibrational modes are assigned with the aid of ab initio density functional theory calculations, and the INS spectra with the a-CLIMAX program. A spectrum in which the H modes of coronene have been computationally suppressed, a carbon-only coronene spectrum, is a better representation of the spectrum of a graphite layer than is coronene itself. Dihydrogen dosing of a Pt/C catalyst caused amplification of the surface modes of carbon, an effect described as H riding on carbon. From the enhancement of the low-energy carbon modes (100-600 cm(-1)) it is concluded that spillover hydrogen becomes attached to dangling bonds at the edges of graphitic regions of the carbon support. (C) 2003 Elsevier Science B.V. All rights reserved.

Relevance: 30.00%

Abstract:

A fermentation system was designed to model the human colonic microflora in vitro. The system provided a framework of mucin beads to encourage the adhesion of bacteria, which was encased within a dialysis membrane. The void between the beads was inoculated with faeces from human donors. Water and metabolites were removed from the fermentation by osmosis using a solution of polyethylene glycol (PEG). The system was concomitantly inoculated alongside a conventional single-stage chemostat. Three fermentations were carried out using inocula from three healthy human donors. Bacterial populations from the chemostat and biofilm system were enumerated using fluorescence in situ hybridization. The culture fluid was also analysed for its short-chain fatty acid (SCFA) content. A higher cell density was achieved in the biofilm fermentation system (taking into account the contribution made by the bead-associated bacteria) as compared with the chemostat, owing to the removal of water and metabolites. Evaluation of the bacterial populations revealed that the biofilm system was able to support two distinct groups of bacteria: bacteria growing in association with the mucin beads and planktonic bacteria in the culture fluid. Furthermore, distinct differences were observed between populations in the biofilm fermenter system and the chemostat, with the former supporting higher populations of clostridia and Escherichia coli. SCFA levels were lower in the biofilm system than in the chemostat, as in the former they were removed via the osmotic effect of the PEG. These experiments demonstrated the potential usefulness of the biofilm system for investigating the complexity of the human colonic microflora and the contribution made by sessile bacterial populations.

Relevance: 30.00%

Abstract:

OBJECTIVE: To compare insulin sensitivity (Si) derived from a frequently sampled intravenous glucose tolerance test (FSIGT) and subsequent minimal model analysis with surrogate measures of insulin sensitivity and resistance, and to compare features of the metabolic syndrome between Caucasians and Indian Asians living in the UK. SUBJECTS: In all, 27 healthy male volunteers (14 UK Caucasians and 13 UK Indian Asians), with a mean age of 51.2 +/- 1.5 y, BMI of 25.8 +/- 0.6 kg/m(2) and Si of 2.85 +/- 0.37. MEASUREMENTS: Si was determined from an FSIGT with subsequent minimal model analysis. The concentrations of insulin, glucose and nonesterified fatty acids (NEFA) were analysed in fasting plasma and used to calculate surrogate measures of insulin sensitivity (quantitative insulin sensitivity check index (QUICKI), revised QUICKI) and resistance (homeostasis model assessment of insulin resistance (HOMA IR), fasting insulin resistance index (FIRI), Bennett's index, fasting insulin, insulin-to-glucose ratio). Plasma concentrations of triacylglycerol (TAG), total cholesterol, high-density lipoprotein cholesterol (HDL-C) and low-density lipoprotein cholesterol (LDL-C) were also measured in the fasted state. Anthropometric measurements were conducted to determine body-fat distribution. RESULTS: Correlation analysis identified the strongest relationship between Si and the revised QUICKI (r = 0.67; P = 0.000). Significant associations were also observed between Si and QUICKI (r = 0.51; P = 0.007), HOMA IR (r = -0.50; P = 0.009), FIRI and fasting insulin. The Indian Asian group had lower HDL-C (P = 0.001), a higher waist-hip ratio (P = 0.01) and was significantly less insulin sensitive (Si) than the Caucasian group (P = 0.02). CONCLUSION: The revised QUICKI demonstrated a statistically strong relationship with the minimal model. However, it was unable to differentiate between insulin-sensitive and -resistant groups in this study.
Future larger studies in population groups with varying degrees of insulin sensitivity are recommended to investigate the general applicability of the revised QUICKI surrogate technique.
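The fasting-sample surrogate indices compared in this study have simple closed forms; the sketch below uses their standard definitions (note the unit conventions), with illustrative input values rather than data from the study:

```python
import math

def homa_ir(glucose_mmol_l, insulin_uu_ml):
    """HOMA-IR from fasting glucose (mmol/L) and insulin (microU/mL)."""
    return glucose_mmol_l * insulin_uu_ml / 22.5

def quicki(glucose_mg_dl, insulin_uu_ml):
    """QUICKI = 1 / (log10 I0 + log10 G0), with glucose in mg/dL."""
    return 1.0 / (math.log10(insulin_uu_ml) + math.log10(glucose_mg_dl))

def revised_quicki(glucose_mg_dl, insulin_uu_ml, nefa_mmol_l):
    """Revised QUICKI adds a log10 NEFA (mmol/L) term to the denominator
    (base-10 logs, as for QUICKI)."""
    return 1.0 / (math.log10(insulin_uu_ml) + math.log10(glucose_mg_dl)
                  + math.log10(nefa_mmol_l))

# Illustrative fasting values, not data from this study:
h = homa_ir(5.0, 10.0)    # glucose 5.0 mmol/L, insulin 10 microU/mL
q = quicki(90.0, 10.0)    # the same glucose expressed as ~90 mg/dL
rq = revised_quicki(90.0, 10.0, 0.5)
```

Because all three indices need only a single fasting sample, they are far cheaper than an FSIGT; the study's point is that this convenience must be weighed against their limited ability to separate sensitive from resistant groups.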

Relevance: 30.00%

Abstract:

Using the classical Parzen window estimate as the target function, the kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density estimates. The proposed algorithm incrementally minimises a leave-one-out test error score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights are finally updated using the multiplicative nonnegative quadratic programming algorithm, which has the ability to reduce the model size further. Except for the kernel width, the proposed algorithm has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Two examples are used to demonstrate the ability of this regression-based approach to effectively construct a sparse kernel density estimate with comparable accuracy to that of the full-sample optimised Parzen window density estimate.
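A heavily simplified sketch of the regression view described above: the full Parzen window estimate is taken as the regression target on a grid and kernels are selected greedily, with plain least squares standing in for the paper's orthogonal forward regression, leave-one-out score and local regularisation:

```python
import numpy as np

def sparse_parzen_sketch(x, h=0.3, n_kernels=5, n_grid=200):
    """Fit a sparse kernel density to the full Parzen window estimate.
    Simplifications vs the paper: the fit is done on a fixed grid, plain
    least squares replaces orthogonal forward regression, and no
    leave-one-out score or local regularisation is used."""
    grid = np.linspace(x.min() - 3 * h, x.max() + 3 * h, n_grid)
    # One candidate Gaussian kernel per data point (columns of Phi).
    Phi = (np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
           / (h * np.sqrt(2.0 * np.pi)))
    target = Phi.mean(axis=1)          # the full-sample Parzen estimate
    selected, residual, w = [], target.copy(), None
    for _ in range(n_kernels):
        # Greedily add the kernel most correlated with the residual;
        # after each least-squares refit the residual is orthogonal to the
        # selected columns, so they are not picked again.
        selected.append(int(np.argmax(np.abs(Phi.T @ residual))))
        w, *_ = np.linalg.lstsq(Phi[:, selected], target, rcond=None)
        residual = target - Phi[:, selected] @ w
    return grid, target, Phi[:, selected] @ w, selected, w

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 100), rng.normal(2.0, 0.5, 100)])
grid, full_est, sparse_est, selected, w = sparse_parzen_sketch(x)
```

Even this crude greedy variant typically tracks the 200-kernel Parzen estimate closely with a handful of kernels, which is the economy the regression formulation is after; the paper's orthogonal decomposition, LOO criterion and weight update make the selection principled and automatic.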

Relevance: 30.00%

Abstract:

A novel sparse kernel density estimator is derived based on a regression approach, which selects a very small subset of significant kernels by means of the D-optimality experimental design criterion using an orthogonal forward selection procedure. The weights of the resulting sparse kernel model are calculated using the multiplicative nonnegative quadratic programming algorithm. The proposed method is computationally attractive, in comparison with many existing kernel density estimation algorithms. Our numerical results also show that the proposed method compares favourably with other existing methods, in terms of both test accuracy and model sparsity, for constructing kernel density estimates.
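In its generic form, the D-optimality criterion ranks a candidate subset \(S\) of kernels, with design matrix \(\Phi_S\), by the determinant of its information matrix:

```latex
S^{\star} \;=\; \arg\max_{S}\; \det\!\left(\Phi_S^{\top}\Phi_S\right)
```

Since this determinant equals the product of the squared norms of the orthogonalised columns of \(\Phi_S\), it factorises naturally over an orthogonal forward selection, letting kernels be ranked one at a time. (This is the textbook criterion; the paper's exact weighting and stopping rule are not reproduced here.)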

Relevance: 30.00%

Abstract:

This paper presents an efficient construction algorithm for obtaining sparse kernel density estimates based on a regression approach that directly optimizes model generalization capability. Computational efficiency of the density construction is ensured using an orthogonal forward regression, and the algorithm incrementally minimizes the leave-one-out test score. A local regularization method is incorporated naturally into the density construction process to further enforce sparsity. An additional advantage of the proposed algorithm is that it is fully automatic and the user is not required to specify any criterion to terminate the density construction procedure. This is in contrast to an existing state-of-the-art kernel density estimation method using the support vector machine (SVM), where the user is required to specify a critical algorithm parameter. Several examples are included to demonstrate the ability of the proposed algorithm to effectively construct a very sparse kernel density estimate with comparable accuracy to that of the full sample optimized Parzen window density estimate. Our experimental results also demonstrate that the proposed algorithm compares favorably with the SVM method, in terms of both test accuracy and sparsity, for constructing kernel density estimates.

Relevance: 30.00%

Abstract:

Using the classical Parzen window (PW) estimate as the desired response, the kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density (SKD) estimates. The proposed algorithm incrementally minimises a leave-one-out test score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights of the selected sparse model are finally updated using the multiplicative nonnegative quadratic programming algorithm, which ensures the nonnegative and unity constraints for the kernel weights and has the desired ability to reduce the model size further. Except for the kernel width, the proposed method has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Several examples demonstrate the ability of this simple regression-based approach to effectively construct a SKD estimate with comparable accuracy to that of the full-sample optimised PW density estimate. (c) 2007 Elsevier B.V. All rights reserved.
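The multiplicative nonnegative quadratic programming step can be sketched as follows for the special case of an elementwise nonnegative design matrix and target (the kernel setting); imposing the unity constraint by a final renormalisation is a simplification of the constrained update used in the literature:

```python
import numpy as np

def mnqp_weights(Phi, t, n_iter=1000, eps=1e-12):
    """Multiplicative nonnegative update for min_w 0.5*||t - Phi w||^2,
    w >= 0, valid when Phi and t are elementwise nonnegative (as for
    kernel design matrices and a Parzen target). The update
        w_i <- w_i * (Phi^T t)_i / (Phi^T Phi w)_i
    keeps the weights nonnegative and does not increase the objective;
    weights of redundant kernels decay toward zero, shrinking the model.
    Unity constraint imposed here by a final renormalisation."""
    B, b = Phi.T @ Phi, Phi.T @ t
    w = np.full(Phi.shape[1], 1.0 / Phi.shape[1])
    for _ in range(n_iter):
        w *= b / (B @ w + eps)
    return w / w.sum()

# Small check: recover sparse nonnegative weights from an exact target.
Phi = np.eye(4) + 0.1
w_true = np.array([0.6, 0.0, 0.4, 0.0])
t = Phi @ w_true
w = mnqp_weights(Phi, t)
```

The multiplicative form is what makes the update attractive here: no step size is needed, nonnegativity is preserved automatically, and weights driven toward zero can simply be pruned, which is the "reduce the model size further" effect the abstract mentions.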

Relevance: 30.00%

Abstract:

Using the classical Parzen window (PW) estimate as the target function, the sparse kernel density estimator is constructed in a forward-constrained regression (FCR) manner. The proposed algorithm selects significant kernels one at a time, while the leave-one-out (LOO) test score is minimized subject to a simple positivity constraint in each forward stage. The model parameter estimation in each forward stage is simply the solution of a jackknife parameter estimator for a single parameter, subject to the same positivity constraint check. For each selected kernel, the associated kernel width is updated via the Gauss-Newton method with the model parameter estimate fixed. The proposed approach is simple to implement and the associated computational cost is very low. Numerical examples are employed to demonstrate the efficacy of the proposed approach.

Relevance: 30.00%

Abstract:

A sparse kernel density estimator is derived based on the zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used for achieving mathematical tractability and algorithmic efficiency. Under the mild condition of a positive definite design matrix, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the D-optimality based selection algorithm as a preprocessing step to select a small significant subset of the design matrix, the proposed zero-norm based approach offers an effective means for constructing very sparse kernel density estimates with excellent generalisation performance.

Relevance: 30.00%

Abstract:

A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegative and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct very compact and accurate density estimates.