47 results for Simulation studies
in CentAUR: Central Archive, University of Reading - UK
Abstract:
A combined mathematical model for predicting heat penetration and microbial inactivation in a solid body heated by conduction was tested experimentally by inoculating agar cylinders with Salmonella typhimurium or Enterococcus faecium and heating them in a water bath. Regions of growth where bacteria had survived after heating were measured by image analysis and compared with model predictions. Visualisation of the regions of growth was improved by incorporating chromogenic metabolic indicators into the agar. Preliminary tests established that the model performed satisfactorily with both test organisms and with cylinders of different diameters. The model was then used in simulation studies in which the parameters D, z, inoculum size, cylinder diameter and heating temperature were systematically varied. These simulations showed that the biological variables D, z and inoculum size had a relatively small effect on the time needed to eliminate bacteria at the cylinder axis, whereas the physical variables heating temperature and cylinder diameter had a much greater relative effect. © 2005 Elsevier B.V. All rights reserved.
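The log-linear D/z inactivation kinetics that such models rest on can be sketched in a few lines. The parameter values below (reference D-value, z-value, reference temperature) are illustrative only, not taken from the study:

```python
import math

def d_value(temp_c, d_ref, z, t_ref):
    """Decimal reduction time D(T): D falls tenfold for every
    z-degree rise in temperature above the reference temperature."""
    return d_ref * 10 ** ((t_ref - temp_c) / z)

def survivors(n0, temp_c, minutes, d_ref, z, t_ref):
    """First-order (log-linear) survivor count after isothermal heating:
    one log10 reduction per D(T) minutes."""
    return n0 * 10 ** (-minutes / d_value(temp_c, d_ref, z, t_ref))

# Illustrative numbers only: D = 1 min at a 60 °C reference, z = 5 °C.
# Heating 10^6 cells for 6 min at 60 °C gives six log reductions.
n = survivors(n0=1e6, temp_c=60.0, minutes=6.0, d_ref=1.0, z=5.0, t_ref=60.0)
```

In the simulation studies described above, D and z are the biological inputs varied per organism, while heating temperature enters through the conduction model rather than the single isothermal temperature assumed here.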
Abstract:
The computer simulation method has been used to study the structural formation and transition of electro-magneto-rheological (EMR) fluids under compatible electric and magnetic fields. When the fields are applied simultaneously and perpendicularly to each other, the particles rapidly arrange into two-dimensional close-packed layer structures parallel to both fields. The layers then combine to form thicker sheet-like structures, which finally relax into three-dimensional close-packed structures with the help of thermal fluctuations. On the other hand, if the electric field is applied first to induce body-centered tetragonal (BCT) columns in the system, and the magnetic field is then applied in the perpendicular direction, the BCT to face-centered cubic (FCC) structure transition is observed in a very short time. Following that, the structure keeps evolving due to the demagnetization effect and finally forms three-dimensional close-packed structures.
Abstract:
Polycondensation of 2,6-dihydroxynaphthalene with 4,4'-bis(4"-fluorobenzoyl)biphenyl affords a novel, semicrystalline poly(ether ketone) with a melting point of 406 °C and a glass transition temperature (onset) of 168 °C. Molecular modeling and diffraction-simulation studies of this polymer, coupled with data from the single-crystal structure of an oligomer model, have enabled the crystal and molecular structure of the polymer to be determined from X-ray powder data. This structure, the first for any naphthalene-containing poly(ether ketone), is fully ordered, in monoclinic space group P2(1)/b, with two chains per unit cell. Rietveld refinement against the experimental powder data gave a final agreement factor (R-wp) of 6.7%.
Abstract:
We investigated the effect of morphological differences on neuronal firing behavior within the hippocampal CA3 pyramidal cell family by using three-dimensional reconstructions of dendritic morphology in computational simulations of electrophysiology. In this paper, we report for the first time that differences in dendritic structure within the same morphological class can have a dramatic influence on the firing rate and firing mode (spiking versus bursting and type of bursting). Our method consisted of converting morphological measurements from three-dimensional neuroanatomical data of CA3 pyramidal cells into a computational simulator format. In the simulation, active channels were distributed evenly across the cells so that the electrophysiological differences observed in the neurons would only be due to morphological differences. We found that differences in the size of the dendritic tree of CA3 pyramidal cells had a significant qualitative and quantitative effect on the electrophysiological response. Cells with larger dendritic trees: (1) had a lower burst rate, but a higher spike rate within a burst, (2) had higher thresholds for transitions from quiescent to bursting and from bursting to regular spiking and (3) tended to burst with a plateau. Dendritic tree size alone did not account for all the differences in electrophysiological responses. Differences in apical branching, such as the distribution of branch points and terminations per branch order, appear to affect the duration of a burst. These results highlight the importance of considering the contribution of morphology in electrophysiological and simulation studies.
Abstract:
MD simulation studies showing the influence of porosity and carbon surface oxidation on phenol adsorption from aqueous solutions on carbons are reported. Based on a realistic model of activated carbon, three carbon structures with gradually changed microporosity were created. Next, different numbers of surface oxygen groups were introduced. Pores with diameters around 0.6 nm are optimal for phenol adsorption, and after the introduction of surface oxygen functionalities, adsorption of phenol decreases (in accordance with experimental data) for all studied models. This decrease is caused by a pore-blocking effect due to the saturation of surface oxygen groups by strongly hydrogen-bonded water molecules.
Abstract:
OBJECTIVES: The prediction of protein structure and the precise understanding of protein folding and unfolding processes remain among the greatest challenges in structural biology and bioinformatics. Computer simulations based on molecular dynamics (MD) are at the forefront of the effort to gain a deeper understanding of these complex processes. Currently, these MD simulations are usually on the order of tens of nanoseconds, generate a large amount of conformational data and are computationally expensive. More and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing these data. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository allowing researchers to pool and share protein unfolding data. METHODS: To adequately organize, manage, and analyze the data generated by unfolding simulation studies, we designed a data warehouse system that is embedded in a grid environment to facilitate the seamless sharing of available computer resources and thus enable many groups to share complex molecular dynamics simulations on a more regular basis. RESULTS: To gain insight into the conformational fluctuations and stability of the monomeric forms of the amyloidogenic protein transthyretin (TTR), molecular dynamics unfolding simulations of the monomer of human TTR have been conducted. Trajectory data and meta-data of the wild-type (WT) protein and the highly amyloidogenic variant L55P-TTR represent the test case for the data warehouse. CONCLUSIONS: Web and grid services, especially pre-defined data mining services that can run on or 'near' the data repository of the data warehouse, are likely to play a pivotal role in the analysis of molecular dynamics unfolding data.
Abstract:
Genome-wide association studies (GWAS) have been widely used in genetic dissection of complex traits. However, common methods are all based on a fixed-SNP-effect mixed linear model (MLM) and single-marker analysis, such as efficient mixed model analysis (EMMA). These methods require Bonferroni correction for multiple tests, which is often too conservative when the number of markers is extremely large. To address this concern, we proposed a random-SNP-effect MLM (RMLM) and a multi-locus RMLM (MRMLM) for GWAS. The RMLM simply treats the SNP effect as random, but it allows a modified Bonferroni correction to be used to calculate the threshold p-value for significance tests. The MRMLM is a multi-locus model including markers selected from the RMLM method with a less stringent selection criterion. Due to the multi-locus nature, no multiple-test correction is needed. Simulation studies show that the MRMLM is more powerful in QTN detection and more accurate in QTN effect estimation than the RMLM, which in turn is more powerful and accurate than the EMMA. To demonstrate the new methods, we analyzed six flowering-time-related traits in Arabidopsis thaliana and detected more genes than previously reported using the EMMA. Therefore, the MRMLM provides an alternative for multi-locus GWAS.
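As a rough illustration of the fixed-effect, single-marker baseline that the RMLM relaxes, the sketch below tests one marker with ordinary least squares (standing in for the mixed model, which additionally absorbs population structure and kinship) and computes the plain Bonferroni threshold. All data and the marker count are made up:

```python
import math
import statistics

def marker_p_value(genotypes, phenotypes):
    """Two-sided p-value for the slope of a single-marker regression.
    Simple OLS stands in for the mixed-model scan; a normal approximation
    to the t distribution keeps the sketch compact."""
    n = len(genotypes)
    mx, my = statistics.fmean(genotypes), statistics.fmean(phenotypes)
    sxx = sum((x - mx) ** 2 for x in genotypes)
    sxy = sum((x - mx) * (y - my) for x, y in zip(genotypes, phenotypes))
    beta = sxy / sxx                       # estimated SNP effect
    resid = [y - my - beta * (x - mx) for x, y in zip(genotypes, phenotypes)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    z = beta / se
    return 2 * (1 - statistics.NormalDist().cdf(abs(z)))

m = 10_000              # hypothetical number of markers scanned
bonferroni = 0.05 / m   # the fixed-effect threshold the RMLM relaxes
```

A marker passes the scan only if its p-value falls below `bonferroni`; the abstract's point is that this cut-off becomes very stringent as `m` grows, which the random-effect formulation is designed to mitigate.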
Abstract:
The purpose of this study was to improve the prediction of the quantity and type of Volatile Fatty Acids (VFA) produced from fermented substrate in the rumen of lactating cows. A model was formulated that describes the conversion of substrate (soluble carbohydrates, starch, hemi-cellulose, cellulose, and protein) into VFA (acetate, propionate, butyrate, and other VFA). Inputs to the model were observed rates of true rumen digestion of substrates, whereas outputs were observed molar proportions of VFA in rumen fluid. A literature survey generated data on 182 diets (96 roughage and 86 concentrate diets). Coefficient values that define the conversion of a specific substrate into VFA were estimated meta-analytically by regression of the model against observed VFA molar proportions using non-linear regression techniques. Coefficient estimates differed significantly, for acetate and propionate production in particular, between different types of substrate and between roughage and concentrate diets. Deviations of fitted from observed VFA molar proportions could be attributed entirely to random error. In addition to regression against observed data, simulation studies were performed to investigate the potential of the estimation method. Fitted coefficient estimates from simulated data sets appeared accurate, as did fitted rates of VFA production, although the model accounted for only a small fraction (at most 45%) of the variation in VFA molar proportions. The simulation results showed that the latter result was merely a consequence of the statistical analysis chosen and should not be interpreted as an indication of inaccuracy of coefficient estimates. Deviations between fitted and observed values corresponded to those obtained in simulations. © 2005 Elsevier Ltd. All rights reserved.
Abstract:
In a sequential clinical trial, accrual of data on patients often continues after the stopping criterion for the study has been met. This is termed “overrunning.” Overrunning occurs mainly when the primary response from each patient is measured after some extended observation period. The objective of this article is to compare two methods of allowing for overrunning. In particular, simulation studies are reported that assess the two procedures in terms of how well they maintain the intended type I error rate. The effect on power resulting from the incorporation of “overrunning data” using the two procedures is evaluated.
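A minimal example of the kind of check such simulation studies perform is a Monte Carlo estimate of the type I error rate. The sketch below does this for a simple fixed-sample two-arm z-test rather than a sequential design with overrunning, so it only illustrates the machinery, not the paper's procedures:

```python
import random
import statistics

def empirical_type_i_error(n_trials=20_000, n_per_arm=50, alpha=0.05, seed=1):
    """Monte Carlo estimate of the type I error rate of a two-sample z-test
    under the null hypothesis (both arms drawn from the same N(0, 1))."""
    rng = random.Random(seed)
    z_crit = statistics.NormalDist().inv_cdf(1 - alpha / 2)
    se = (2 / n_per_arm) ** 0.5          # known unit variance in both arms
    rejections = 0
    for _ in range(n_trials):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        b = [rng.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        z = (statistics.fmean(a) - statistics.fmean(b)) / se
        if abs(z) > z_crit:
            rejections += 1
    return rejections / n_trials

# The empirical rejection rate should hover around the nominal alpha = 0.05.
rate = empirical_type_i_error(n_trials=5_000)
```

In the article's setting, the simulated trials would additionally apply a sequential stopping rule and then incorporate overrunning data by each of the two methods, checking whether the rejection rate stays near the nominal level.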
Abstract:
A score test is developed for binary clinical trial data, which incorporates patient non-compliance while respecting randomization. It is assumed in this paper that compliance is all-or-nothing, in the sense that a patient either accepts all of the treatment assigned as specified in the protocol, or none of it. Direct analytic comparisons of the adjusted test statistic for both the score test and the likelihood ratio test are made with the corresponding test statistics that adhere to the intention-to-treat principle. It is shown that no gain in power is possible over the intention-to-treat analysis, by adjusting for patient non-compliance. Sample size formulae are derived and simulation studies are used to demonstrate that the sample size approximation holds. Copyright © 2003 John Wiley & Sons, Ltd.
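For context on the sample-size formulae mentioned, the standard normal-approximation formula for comparing two proportions can be sketched as follows. This is the textbook formula, not the compliance-adjusted formula derived in the paper:

```python
import math
from statistics import NormalDist

def two_proportion_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided test of p1 vs p2, using the
    unpooled normal-approximation formula
    n = (z_{1-alpha/2} + z_{power})^2 [p1(1-p1) + p2(1-p2)] / (p1 - p2)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

# e.g. detecting 50% vs 60% response at 5% two-sided alpha, 80% power
n_per_arm = two_proportion_sample_size(0.6, 0.5)
```

The paper's simulation studies play the role of verifying that an approximation of this kind remains accurate once the non-compliance adjustment is built in.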
Abstract:
In conventional phylogeographic studies, historical demographic processes are elucidated from the geographical distribution of individuals represented on an inferred gene tree. However, the interpretation of gene trees in this context can be difficult as the same demographic/geographical process can randomly lead to multiple different genealogies. Likewise, the same gene trees can arise under different demographic models. This problem has led to the emergence of many statistical methods for making phylogeographic inferences. A popular phylogeographic approach based on nested clade analysis is challenged by the fact that a certain amount of the interpretation of the data is left to the subjective choices of the user, and it has been argued that the method performs poorly in simulation studies. More rigorous statistical methods based on coalescence theory have been developed. However, these methods may also be challenged by computational problems or poor model choice. In this review, we will describe the development of statistical methods in phylogeographic analysis, and discuss some of the challenges facing these methods.
Abstract:
We focus on the comparison of three statistical models used to estimate the treatment effect in meta-analysis when individually pooled data are available. The models are two conventional models, namely a multi-level model and a model based upon an approximate likelihood, and a newly developed model, the profile likelihood model, which might be viewed as an extension of the Mantel-Haenszel approach. To exemplify these methods, we use results from a meta-analysis of 22 trials to prevent respiratory tract infections. We show that by using the multi-level approach, in the case of baseline heterogeneity, the number of clusters or components is considerably over-estimated. The approximate and profile likelihood methods showed nearly the same pattern for the treatment effect distribution. To provide further evidence, two simulation studies were performed. The profile likelihood can be considered a clear alternative to the approximate likelihood model. In the case of strong baseline heterogeneity, the profile likelihood method shows superior behaviour when compared with the multi-level model. Copyright © 2006 John Wiley & Sons, Ltd.
Abstract:
Analysis of X-ray powder data for the melt-crystallisable aromatic poly(thioether thioether ketone) [-S-Ar-S-Ar-CO-Ar](n) ('PTTK', Ar = 1,4-phenylene) reveals that it adopts a crystal structure very different from that established for its ether analogue PEEK. Molecular modelling and diffraction-simulation studies of PTTK show that the structure of this polymer is analogous to that of melt-crystallised poly(thioether ketone) [-S-Ar-CO-Ar](n), in which the carbonyl linkages in symmetry-related chains are aligned anti-parallel to one another, and that these bridging units are crystallographically interchangeable. The final model for the crystal structure of PTTK is thus disordered, in the monoclinic space group I2/a (two chains per unit cell), with cell dimensions a = 7.83, b = 6.06, c = 10.35 Å, β = 93.47°. © 2005 Elsevier Ltd. All rights reserved.
Abstract:
The work reported in this paper is motivated by biomimetic inspiration: the transformation of patterns. The major issue addressed is the development of feasible methods for transformation based on a macroscopic tool. The general requirement for the feasibility of the transformation method is determined by classifying pattern formation approaches and their characteristics. A formal definition of pattern transformation is provided, and four special cases, namely elementary and geometric transformation based on repositioning all or some robotic agents, are introduced. A feasible method for transforming patterns geometrically, based on the operation of a macroscopic parameter of the swarm, is considered. The transformation method is applied to a swarm model which lends itself to the transformation technique. Simulation studies are developed to validate the approach, and indeed confirm its feasibility.
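A geometric transformation of the kind described, repositioning agents by scaling and rotating the pattern about its centroid and then translating it, might be sketched as follows. The function and its parameters are illustrative, not the paper's method:

```python
import math

def transform_pattern(positions, scale=1.0, angle=0.0, shift=(0.0, 0.0)):
    """Geometrically transform a swarm pattern given as (x, y) agent
    positions: scale and rotate about the pattern's centroid, then
    translate the whole pattern by `shift`."""
    n = len(positions)
    cx = sum(x for x, _ in positions) / n      # centroid x
    cy = sum(y for _, y in positions) / n      # centroid y
    c, s = math.cos(angle), math.sin(angle)
    out = []
    for x, y in positions:
        dx, dy = scale * (x - cx), scale * (y - cy)
        out.append((cx + c * dx - s * dy + shift[0],
                    cy + s * dx + c * dy + shift[1]))
    return out
```

Here the scale factor, rotation angle, and shift play the role of macroscopic parameters: each agent computes its new target position from the same few global values rather than from agent-specific instructions.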
Abstract:
Estimation of a population size by means of capture-recapture techniques is an important problem occurring in many areas of the life and social sciences. We consider the frequencies-of-frequencies situation, where a count variable is used to summarize how often a unit has been identified in the target population of interest. The distribution of this count variable is zero-truncated, since zero identifications do not occur in the sample. As an application we consider the surveillance of scrapie in Great Britain. In this case study, holdings with scrapie that are not identified (zero counts) do not enter the surveillance database. The count variable of interest is the number of scrapie cases per holding. For count distributions a common model is the Poisson distribution and, to adjust for potential heterogeneity, a discrete mixture of Poisson distributions is used. Mixtures of Poissons usually provide an excellent fit, as will be demonstrated in the application of interest. However, as has recently been demonstrated, mixtures also suffer from the so-called boundary problem, resulting in overestimation of population size. It is suggested here to select the mixture model on the basis of the Bayesian Information Criterion. This strategy is further refined by employing a bagging procedure leading to a series of estimates of population size. Using the median of this series, highly influential size estimates are avoided. In limited simulation studies it is shown that the procedure leads to estimates with remarkably small bias.
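As a baseline for the mixture approach described, a single (unmixed) zero-truncated Poisson estimate of population size can be sketched as follows; the mixture, BIC selection, and bagging refinements of the paper are deliberately omitted:

```python
import math

def fit_truncated_poisson_mean(sample_mean, tol=1e-10):
    """Solve for the Poisson mean lambda whose zero-truncated mean,
    lambda / (1 - exp(-lambda)), equals the observed sample mean
    (bisection; the truncated mean always exceeds lambda itself)."""
    lo, hi = 1e-12, sample_mean
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid / (1 - math.exp(-mid)) < sample_mean:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def population_size(counts):
    """Horvitz-Thompson style estimate N = n / (1 - P(zero count)),
    where the zero-class probability exp(-lambda) is recovered from the
    observed (zero-truncated) counts."""
    n = len(counts)
    lam = fit_truncated_poisson_mean(sum(counts) / n)
    return n / (1 - math.exp(-lam))
```

In the scrapie application, `counts` would be the cases per identified holding; heterogeneity across holdings is what motivates replacing the single Poisson here by the mixture that the abstract then stabilises with BIC and bagging.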