Abstract:
The mechanisms involved in the recognition of microbial pathogens and activation of the immune system have been extensively studied. However, the mechanisms involved in the recovery phase of an infection are incompletely characterized at both the cellular and physiological levels. Here, we establish a Caenorhabditis elegans-Salmonella enterica model of acute infection and antibiotic treatment for studying biological changes during the resolution phase of an infection. Using whole genome expression profiles of acutely infected animals, we found that genes that are markers of innate immunity are down-regulated upon recovery, while genes involved in xenobiotic detoxification, redox regulation, and cellular homeostasis are up-regulated. In silico analyses demonstrated that genes altered during recovery from infection were transcriptionally regulated by conserved transcription factors, including GATA/ELT-2, FOXO/DAF-16, and Nrf/SKN-1. Finally, we found that recovery from an acute bacterial infection is dependent on ELT-2 activity.
Abstract:
Determining how information flows along anatomical brain pathways is a fundamental requirement for understanding how animals perceive their environments, learn, and behave. Attempts to reveal such neural information flow have been made using linear computational methods, but neural interactions are known to be nonlinear. Here, we demonstrate that a dynamic Bayesian network (DBN) inference algorithm we originally developed to infer nonlinear transcriptional regulatory networks from gene expression data collected with microarrays is also successful at inferring nonlinear neural information flow networks from electrophysiology data collected with microelectrode arrays. The inferred networks we recover from the songbird auditory pathway are correctly restricted to a subset of known anatomical paths, are consistent with the timing of the system, and reveal both the importance of reciprocal feedback in auditory processing and greater information flow to higher-order auditory areas when birds hear natural as opposed to synthetic sounds. A linear method applied to the same data incorrectly produces networks with information flow to non-neural tissue and over paths known not to exist. To our knowledge, this study represents the first biologically validated demonstration of an algorithm to successfully infer neural information flow networks.
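The paper's NETWORKINFERENCE algorithm is not reproduced here; the following is only a minimal sketch of the underlying idea, assuming discretized time series: score time-lagged parent sets of each variable with a multinomial BIC, as a dynamic Bayesian network would. The function names and the exhaustive parent search are illustrative assumptions, workable only at toy scale.

```python
# Minimal DBN structure-scoring sketch over discretized time series.
# data: int array of shape (time, variables), values in {0..n_states-1}.
import itertools
import numpy as np

def bic_score(child, parents, data, n_states=3):
    """BIC of a multinomial model P(child_t | parents_{t-1})."""
    T = data.shape[0] - 1                      # number of transitions
    x = data[1:, child]                        # child at time t
    if parents:
        pa = data[:-1, parents]                # parents at time t-1
        # encode each parent configuration as a single integer
        cfg = np.ravel_multi_index(pa.T, (n_states,) * len(parents))
    else:
        cfg = np.zeros(T, dtype=int)
    loglik = 0.0
    for c in np.unique(cfg):
        counts = np.bincount(x[cfg == c], minlength=n_states)
        p = counts / counts.sum()
        loglik += np.sum(counts[counts > 0] * np.log(p[counts > 0]))
    k = (n_states - 1) * n_states ** len(parents)   # free parameters
    return loglik - 0.5 * k * np.log(T)

def best_parents(child, candidates, data, max_parents=2):
    """Exhaustively search small lag-1 parent sets (toy scale only)."""
    best, best_s = (), bic_score(child, [], data)
    for r in range(1, max_parents + 1):
        for ps in itertools.combinations(candidates, r):
            s = bic_score(child, list(ps), data)
            if s > best_s:
                best, best_s = ps, s
    return best
```

Because parents are taken strictly from the previous time step, feedback loops pose no acyclicity problem, which is one reason DBNs suit information-flow inference.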
Abstract:
INTRODUCTION: We previously reported models that characterized the synergistic interaction between remifentanil and sevoflurane in blunting responses to verbal and painful stimuli. This preliminary study evaluated the ability of these models to predict a return of responsiveness during emergence from anesthesia and a response to tibial pressure when patients required analgesics in the recovery room. We hypothesized that model predictions would be consistent with observed responses. We also hypothesized that under non-steady-state conditions, accounting for the lag time between the sevoflurane effect-site concentration (Ce) and end-tidal (ET) concentration would improve predictions. METHODS: Twenty patients received a sevoflurane, remifentanil, and fentanyl anesthetic. Two model predictions of responsiveness were recorded at emergence: an ET-based and a Ce-based prediction. Similarly, two predictions of a response to noxious stimuli were recorded when patients first required analgesics in the recovery room. Model predictions were compared with observations using graphical and temporal analyses. RESULTS: While patients were anesthetized, model predictions indicated a high likelihood that patients would be unresponsive (≥99%). However, after termination of the anesthetic, the models exhibited a wide range of predictions at emergence (1%-97%). Although wide, the Ce-based predictions of responsiveness were better distributed over a percentage ranking of observations than the ET-based predictions. For the ET-based model, 45% of the patients awoke within 2 min of the 50% model-predicted probability of unresponsiveness and 65% awoke within 4 min. For the Ce-based model, 45% of the patients awoke within 1 min of the 50% model-predicted probability of unresponsiveness and 85% awoke within 3.2 min. Predictions of a response to a painful stimulus in the recovery room were similar for the Ce- and ET-based models. DISCUSSION: The results confirmed, in part, our study hypothesis: accounting for the lag time between Ce and ET sevoflurane concentrations improved model predictions of responsiveness but had no effect on predicting a response to a noxious stimulus in the recovery room. These models may be useful in predicting events of clinical interest, but large-scale evaluations with numerous patients are needed to better characterize model performance.
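The Ce-ET lag the authors account for is conventionally modeled as first-order equilibration with a rate constant ke0; the paper's fitted interaction models are not reproduced here, and the ke0 and dosing values below are assumed for illustration only.

```python
# First-order effect-site equilibration, a standard way to model the lag
# between end-tidal (ET) and effect-site (Ce) sevoflurane concentration:
#     dCe/dt = ke0 * (ET(t) - Ce(t))
# ke0 is an assumed illustrative value, not the paper's fitted one.
import numpy as np

def effect_site(et, dt=1.0, ke0=0.3):
    """Integrate dCe/dt = ke0*(ET - Ce) over sampled ET values.

    et  : sequence of end-tidal concentrations (e.g. vol%), one per dt
    dt  : sampling interval in minutes
    ke0 : equilibration rate constant (1/min), assumed value
    """
    ce = np.zeros(len(et))
    for t in range(1, len(et)):
        # exact solution over one step with ET held constant
        ce[t] = et[t] + (ce[t - 1] - et[t]) * np.exp(-ke0 * dt)
    return ce

# Example: ET drops to 0 at emergence; Ce lags behind the measured ET.
et = np.concatenate([np.full(10, 2.0), np.zeros(20)])
print(np.round(effect_site(et), 2))
```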
Abstract:
Determination of copy number variants (CNVs) inferred from genome-wide single-nucleotide polymorphism arrays has shown increasing utility in associating genetic variants with disease. Several CNV detection methods are available, but they differ in CNV call thresholds and characteristics. We evaluated the relative performance of seven methods: circular binary segmentation, CNVFinder, cnvPartition, gain and loss of DNA, Nexus algorithms, PennCNV and QuantiSNP. Tested data included real and simulated Illumina HumanHap550 data from the Singapore cohort study of the risk factors for Myopia (SCORM) and simulated data from Affymetrix 6.0 and platform-independent distributions. The normalized singleton ratio (NSR) is proposed as a metric for parameter optimization before enacting full analysis. We used 10 SCORM samples to optimize parameter settings for each method and then evaluated method performance at the optimal parameters using 100 SCORM samples. The statistical power, false-positive rates, and receiver operating characteristic (ROC) curve residuals were evaluated by simulation studies. Optimal parameters, as determined by NSR and ROC curve residuals, were consistent across datasets. QuantiSNP outperformed the other methods based on ROC curve residuals over most datasets. Nexus Rank and SNPRank have low specificity and high power. Nexus Rank calls oversized CNVs. PennCNV detects one of the fewest numbers of CNVs.
Abstract:
MOTIVATION: Technological advances that allow routine identification of high-dimensional risk factors have led to high demand for statistical techniques that enable full utilization of these rich sources of information in genetic studies. Variable selection for censored outcome data, as well as control of false discoveries (i.e. inclusion of irrelevant variables) in the presence of high-dimensional predictors, presents serious challenges. This article develops a computationally feasible method based on boosting and stability selection. Specifically, we modified component-wise gradient boosting to improve computational feasibility and introduced random permutation into stability selection to control false discoveries. RESULTS: We have proposed a high-dimensional variable selection method that incorporates stability selection to control false discoveries. Comparisons between the proposed method and the commonly used univariate and Lasso approaches for variable selection reveal that the proposed method yields fewer false discoveries. The proposed method is applied to study the associations of 2339 common single-nucleotide polymorphisms (SNPs) with overall survival among cutaneous melanoma (CM) patients. The results confirm that BRCA2 pathway SNPs are likely to be associated with overall survival, as reported in previous literature. Moreover, we have identified several new Fanconi anemia (FA) pathway SNPs that are likely to modulate the survival of CM patients. AVAILABILITY AND IMPLEMENTATION: The related source code and documents are freely available at https://sites.google.com/site/bestumich/issues. CONTACT: yili@umich.edu.
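The paper's modified boosting procedure is not reproduced here; the sketch below only illustrates the two ingredients the abstract names, subsample-based stability selection plus a permutation-calibrated threshold, using an off-the-shelf lasso on an uncensored response as a stand-in learner. The 0.6 frequency floor and other values are assumptions.

```python
# Hedged sketch of stability selection with a permutation-calibrated
# selection threshold (lasso stands in for the paper's boosting).
import numpy as np
from sklearn.linear_model import Lasso

def selection_freq(X, y, alpha=0.1, n_sub=50, rng=None):
    """Fraction of half-subsamples in which each variable is selected."""
    rng = rng or np.random.default_rng(0)
    n, p = X.shape
    freq = np.zeros(p)
    for _ in range(n_sub):
        idx = rng.choice(n, size=n // 2, replace=False)  # half-subsample
        coef = Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_
        freq += coef != 0
    return freq / n_sub

def stable_set(X, y, alpha=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    freq = selection_freq(X, y, alpha, rng=rng)
    # permute y to estimate how often irrelevant variables get selected
    null_freq = selection_freq(X, rng.permutation(y), alpha, rng=rng)
    threshold = max(null_freq.max(), 0.6)   # 0.6 is an assumed floor
    return np.where(freq >= threshold)[0]
```

Permuting the outcome breaks every real association, so the selection frequencies under permutation give an empirical null against which to set the threshold, which is the false-discovery-control idea the abstract describes.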
Abstract:
During bacterial growth, a cell approximately doubles in size before division, after which it splits into two daughter cells. This process is subject to the inherent perturbations of cellular noise and thus requires regulation for cell-size homeostasis. The mechanisms underlying the control and dynamics of cell size remain poorly understood, owing to the difficulty of sizing individual bacteria over long periods of time in a high-throughput manner. Here we measure and analyse long-term, single-cell growth and division across different Escherichia coli strains and growth conditions. We show that a subset of cells in a population exhibit transient oscillations in cell size with periods that stretch across several (more than ten) generations. Our analysis reveals that a simple law governing cell-size control, a noisy linear map, explains the origins of these cell-size oscillations across all strains. This noisy linear map implements a negative feedback on cell-size control: a cell with a larger initial size tends to divide earlier, whereas one with a smaller initial size tends to divide later. Combining simulations of cell growth and division with experimental data, we demonstrate that this noisy linear map generates transient oscillations, not just in cell size, but also in constitutive gene expression. Our work provides new insights into the dynamics of bacterial cell-size regulation, with implications for the physiological processes involved.
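As a rough illustration of how a noisy linear map can produce oscillations, the sketch below iterates s[n+1] = a*s[n] + b + noise across generations. The slope, intercept, and noise level are assumed values, not the paper's fits; a negative slope (one hedged reading of the negative feedback, where larger cells divide earlier) makes deviations from the fixed point alternate in sign, i.e. oscillate transiently until they decay and noise re-excites them.

```python
# Toy simulation of a noisy linear map for newborn cell size:
#     s[n+1] = a * s[n] + b + noise
# All parameter values are assumptions for illustration only.
import numpy as np

def simulate_sizes(a=-0.6, b=4.8, sigma=0.15, n_gen=40, s0=2.0, seed=1):
    rng = np.random.default_rng(seed)
    s = np.empty(n_gen)
    s[0] = s0
    for n in range(1, n_gen):
        s[n] = a * s[n - 1] + b + rng.normal(0.0, sigma)
    return s

sizes = simulate_sizes()
fixed_point = 4.8 / (1 - (-0.6))        # s* = b / (1 - a) = 3.0
print(np.round(sizes - fixed_point, 2))  # note the alternating sign
```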
Abstract:
Our research was conducted to improve the timeliness, coordination, and communication of the detection, investigation, and decision-making phases of the response to an aerosolized anthrax attack in the metropolitan Washington, DC, area, with the goal of reducing casualties. We gathered information on current response protocols through an extensive literature review and interviews with relevant officials and experts in order to identify potential problems in the various steps of detection, investigation, and response. Interviews with officials from private- and government-sector agencies allowed us to develop a set of models of interactions and a communication network to identify discrepancies and redundancies that would lengthen the delay in initiating a public health response. In addition, we created a computer simulation of aerosol spread that uses weather patterns and population density to estimate the number of infected individuals within a target region as a function of the virulence and dimensions of the weaponized spores. Finally, we developed conceptual models to design recommendations for our collaborating contacts and agencies, which could use such policy and analysis interventions to improve the overall response, primarily through changes to emergency protocol functions and suggested technological improvements to detection and monitoring.
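The abstract does not state which dispersion model the simulation used; a common minimal choice for a ground-level aerosol release is the Gaussian plume, sketched below with assumed dispersion coefficients and an assumed exponential dose-response, purely to show how wind and population density enter an infected-count estimate.

```python
# Hedged Gaussian-plume sketch for a ground-level aerosol release.
# All parameter values and function names are assumptions.
import numpy as np

def plume_concentration(x, y, q=1.0, u=5.0, a=0.08, b=0.06):
    """Ground-level concentration at downwind distance x > 0, crosswind y.

    q : source strength (spores/s); u : wind speed (m/s);
    a, b : crude linear dispersion coefficients, sigma = coeff * x.
    """
    sigma_y = a * x
    sigma_z = b * x
    return (q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-0.5 * (y / sigma_y) ** 2))

def infected_estimate(conc, density, breathing_rate, dose50, exposure_s):
    """Toy attack-rate estimate from inhaled dose via an exponential model."""
    dose = conc * breathing_rate * exposure_s
    p_infect = 1.0 - np.exp(-np.log(2) * dose / dose50)
    return p_infect * density   # expected infections per unit area
```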
Abstract:
We recently developed an approach for testing the accuracy of network inference algorithms by applying them to biologically realistic simulations with known network topology. Here, we seek to determine the degree to which the network topology and data sampling regime influence the ability of our Bayesian network inference algorithm, NETWORKINFERENCE, to recover gene regulatory networks. NETWORKINFERENCE performed well at recovering feedback loops and multiple targets of a regulator with small amounts of data, but required more data to recover multiple regulators of a gene. When collecting the same number of data samples at different intervals from the system, the best recovery was produced by sampling intervals long enough that sampling covered the propagation of regulation through the network, but not so long that the intervals missed internal dynamics. These results further elucidate the possibilities and limitations of network inference based on biological data.
Abstract:
MOTIVATION: Although many network inference algorithms have been presented in the bioinformatics literature, no suitable approach has been formulated for evaluating their effectiveness at recovering models of complex biological systems from limited data. To overcome this limitation, we propose an approach to evaluate network inference algorithms according to their ability to recover a complex functional network from biologically reasonable simulated data. RESULTS: We designed a simulator to generate data representing a complex biological system at multiple levels of organization: behaviour, neural anatomy, brain electrophysiology, and gene expression of songbirds. About 90% of the simulated variables are unregulated by other variables in the system and are included simply as distracters. We sampled the simulated data at intervals as one would sample from a biological system in practice, and then used the sampled data to evaluate the effectiveness of an algorithm we developed for functional network inference. We found that our algorithm is highly effective at recovering the functional network structure of the simulated system, including the irrelevance of unregulated variables, from sampled data alone. To assess the reproducibility of these results, we tested our inference algorithm on 50 separately simulated sets of data, and it consistently recovered the complex functional network structure underlying the simulated data almost perfectly. To our knowledge, this is the first approach for evaluating the effectiveness of functional network inference algorithms at recovering models from limited data. Our simulation approach also enables researchers to design, a priori, experiments and data-collection protocols that are amenable to functional network inference.
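One simple way to score recovery against a known topology (the paper's own scoring code is not reproduced here, so the names below are illustrative) is precision and recall over directed edges, which also captures whether distracter variables were correctly left unconnected.

```python
# Score inferred directed edges against the known simulated topology.
def edge_recovery(true_edges, inferred_edges):
    true_set, inf_set = set(true_edges), set(inferred_edges)
    tp = len(true_set & inf_set)                      # correctly recovered
    precision = tp / len(inf_set) if inf_set else 1.0
    recall = tp / len(true_set) if true_set else 1.0
    return precision, recall

true_edges = [("A", "B"), ("B", "C")]
inferred = [("A", "B"), ("B", "C"), ("D", "C")]  # "D" is a distracter
print(edge_recovery(true_edges, inferred))       # precision = 2/3, recall = 1.0
```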
Abstract:
The experiments in the Cole and Moore article in the first issue of the Biophysical Journal provided the first independent experimental confirmation of the Hodgkin-Huxley (HH) equations. A log-log plot of the K current versus time showed that raising the HH variable n to the sixth power provided the best fit to the data. Subsequent simulations using n^6 and setting the resting potential at the in vivo value simplify the HH equations by eliminating the leakage term. Our article also reported that the K current in response to a depolarizing step to E_Na was delayed if the step was preceded by a hyperpolarization. While the interpretation of this phenomenon in the article was flawed, subsequent simulations show that the effect arises entirely from the original HH equations.
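A minimal sketch of the exponent in question: the delayed-rectifier current is I_K = g_K * n^p * (V - E_K) with p = 6 rather than the standard HH p = 4, integrated with the classic rate functions (modern sign convention, rest near -65 mV). The parameter values are the textbook squid-axon ones, assumed here for illustration.

```python
# K+ current during a voltage step with the gating variable n raised to
# the sixth power; standard HH uses n**4.
import numpy as np

def alpha_n(v):
    return 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))

def beta_n(v):
    return 0.125 * np.exp(-(v + 65.0) / 80.0)

def k_current(v_step=-20.0, v_hold=-65.0, gk=36.0, ek=-77.0,
              dt=0.01, t_max=10.0, power=6):
    """Integrate dn/dt = alpha*(1-n) - beta*n during a step from v_hold."""
    n = alpha_n(v_hold) / (alpha_n(v_hold) + beta_n(v_hold))  # steady state
    t = np.arange(0.0, t_max, dt)
    i_k = np.empty_like(t)
    for k in range(len(t)):
        n += dt * (alpha_n(v_step) * (1.0 - n) - beta_n(v_step) * n)
        i_k[k] = gk * n ** power * (v_step - ek)
    return t, i_k
```

Starting from a more hyperpolarized holding potential lowers the initial n, and a high power such as n^6 turns that into a longer delay before the current rises, which is the Cole-Moore shift the abstract describes.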
Abstract:
In this paper, a computer simulation tool capable of modelling multi-physics processes in complex geometries has been developed and applied to the casting process. The quest for high-quality, complex cast components demanded by the aerospace and automobile industries requires more precise numerical modelling techniques, ones that are generic and modular in their approach to modelling multi-process problems. For such a computer model to be successful in shape casting, the complete casting process needs to be addressed, the major events being:
• filling of hot liquid metal into a cavity mould
• solidification and latent heat evolution of the liquid metal
• convection currents generated in the liquid metal by thermal gradients
• deformation of the cast and stress development in the solidified metal
• macroscopic porosity formation
The above phenomena combine the analysis of fluid flow, heat transfer, change of phase, and thermal stress development. None of these events can be treated in isolation, as they inexorably interact with each other in a complex way. Conditions such as the design of the running system, the location of feeders and chills, the moulding materials, and the types of boundary conditions can all affect the final cast quality and must be appropriately represented in the model.
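As one concrete example of the coupled phenomena listed above, the sketch below treats latent heat release during solidification with a textbook 1D explicit enthalpy method. This is a generic scheme with assumed material values, not the paper's multi-physics code.

```python
# 1D enthalpy method: track enthalpy h, recover temperature through an
# isothermal phase change at T_MELT. Material values are assumed.
import numpy as np

RHO, CP, K, L = 2700.0, 900.0, 200.0, 4.0e5  # density, heat cap., cond., latent heat
T_MELT = 933.0

def temperature(h):
    """Invert enthalpy (J/kg) to temperature across solid/mushy/liquid."""
    t = np.where(h < CP * T_MELT, h / CP, T_MELT)     # solid, else mushy
    liquid = h > CP * T_MELT + L
    return np.where(liquid, (h - L) / CP, t)          # fully liquid

def solidify(n=50, dx=1e-3, dt=1e-3, steps=2000, t_init=960.0, t_wall=300.0):
    h = np.full(n, CP * t_init + L)       # start fully liquid
    for _ in range(steps):
        t = temperature(h)
        t[0] = t_wall                     # chilled mould wall (Dirichlet)
        lap = np.zeros(n)
        lap[1:-1] = (t[2:] - 2 * t[1:-1] + t[:-2]) / dx**2
        h += dt * K * lap / RHO           # explicit update of enthalpy
    return temperature(h)
```

Because temperature is pinned at T_MELT while the latent heat L is absorbed, the scheme captures the solidification plateau without explicitly tracking the phase front.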
Abstract:
The International Maritime Organisation (IMO) has adopted the use of computer simulation to assist in the assessment of the assembly time for passenger ships. A key parameter required for this analysis and specified as part of the IMO guidelines is the passenger response time distribution. It is demonstrated in this paper that the IMO specified response time distribution assumes an unrealistic mathematical form. This unrealistic mathematical form can lead to serious congestion issues being overlooked in the evacuation analysis and lead to incorrect conclusions concerning the suitability of vessel design. In light of these results, it is vital that IMO undertake research to generate passenger response time data suitable for use in evacuation analysis of passenger ships. Until this type of data becomes readily available, it is strongly recommended that rather than continuing to use the artificial and unrepresentative form of the response time distribution, IMO should adopt plausible and more realistic response time data derived from land based applications.
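To illustrate why the shape of the response-time distribution matters, the toy comparison below contrasts a uniform distribution (standing in for a simplistic specification; the IMO form itself is not reproduced here) with the log-normal shape typical of observed human response-time data. All parameter values are assumptions.

```python
# Compare tail behaviour of two candidate response-time distributions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

uniform_rt = rng.uniform(0.0, 300.0, n)                          # seconds
lognormal_rt = rng.lognormal(mean=np.log(60.0), sigma=0.8, size=n)

# A long right tail concentrates late responders, which drives the kind
# of congestion a bounded uniform model can miss.
for name, rt in [("uniform", uniform_rt), ("log-normal", lognormal_rt)]:
    print(name, "median=%.0fs" % np.median(rt),
          "95th pct=%.0fs" % np.percentile(rt, 95))
```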
Abstract:
We review the current state of the art in EELS fingerprinting by computer simulation, focusing on the band-structure approach to the problem. Currently, calculations are made using a one-electron theory, but we describe in principle the way to go beyond this to include final-state effects. We include these effects within the one-electron framework using the Slater transition-state formula and assess the errors involved. Two examples are then given which illustrate the use of the one-electron approximation within density functional theory. Our approach is to combine predicted atomic structure with predicted electronic structure to assist in the fingerprinting of complex crystal structures.
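The Slater transition-state formula mentioned above estimates an excitation energy from orbital eigenvalues evaluated with half an electron transferred from the initial to the final orbital; schematically:

```latex
% Slater transition state: eigenvalue difference at half occupancies
\Delta E_{i \to f} \approx
  \varepsilon_f\!\bigl(n_i = \tfrac{1}{2},\, n_f = \tfrac{1}{2}\bigr)
  - \varepsilon_i\!\bigl(n_i = \tfrac{1}{2},\, n_f = \tfrac{1}{2}\bigr)
```

Evaluating the eigenvalues at half occupancy folds much of the final-state orbital relaxation into a single self-consistent calculation, which is what lets one assess final-state corrections while staying within the one-electron framework.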
Abstract:
Structural and magnetic properties of thin Mn films on the Fe(001) surface have been investigated by a combination of photoelectron spectroscopy and computer simulation in the temperature range 300 K ≤ T ≤ 750 K. Room-temperature, as-deposited Mn overlayers are found to be ferromagnetic up to 2.5-monolayer (ML) coverage, with a magnetic moment parallel to that of the iron substrate. The Mn atomic moment decreases with increasing coverage, and thicker samples (4-ML and 4.5-ML coverage) are antiferromagnetic. Photoemission measurements performed while the system temperature rises at a constant rate (dT/dt ≈ 0.5 K/s) detect the first signs of Mn-Fe interdiffusion at T = 450 K, and reveal a broad temperature range (610 K ≤ T ≤ 680 K) in which the interface appears to be stable. Interdiffusion resumes at T ≥ 680 K. Molecular dynamics and Monte Carlo simulations allow us to attribute the stability plateau at 610 K ≤ T ≤ 680 K to the formation of a single-layer MnFe surface alloy with a 2×2 unit cell and a checkerboard distribution of Mn and Fe atoms. X-ray absorption spectroscopy and analysis of the dichroic signal show that the alloy has a ferromagnetic spin structure, collinear with that of the substrate. The magnetic moments of Mn and Fe atoms in the alloy are estimated to be 0.8 μB and 1.1 μB, respectively.
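The paper's MD/MC model is not reproduced here; as a generic illustration of why a checkerboard arrangement emerges, the toy Metropolis sketch below uses an antiferromagnetic-Ising-like energy for a two-component layer. It does not conserve composition, and the J and kT values are assumptions.

```python
# Toy Metropolis ordering of a binary layer: +1 = Mn, -1 = Fe, with
# H = +J * sum_<ij> s_i s_j, so unlike neighbours are favoured and the
# ground state is the checkerboard arrangement.
import numpy as np

def anneal_checkerboard(n=20, J=1.0, kT=0.5, sweeps=400, seed=0):
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(n, n))          # random 50/50 layer
    for _ in range(sweeps * n * n):
        i, j = rng.integers(n, size=2)
        nb = (s[(i + 1) % n, j] + s[(i - 1) % n, j]
              + s[i, (j + 1) % n] + s[i, (j - 1) % n])
        dE = -2.0 * J * s[i, j] * nb              # cost of flipping s[i, j]
        if dE <= 0 or rng.random() < np.exp(-dE / kT):
            s[i, j] = -s[i, j]
    # staggered order parameter: 1.0 for a perfect checkerboard
    parity = (-1.0) ** np.add.outer(np.arange(n), np.arange(n))
    return abs(np.sum(s * parity)) / n**2

print(anneal_checkerboard())   # grows toward 1 as checkerboard order sets in
```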
Abstract:
This study investigates a superposition-based cooperative transmission system. In such a system, a key point is for the relay node to detect the data transmitted from the source node. This issue has received little attention in the existing literature, as the channel is usually assumed to be flat-fading and known a priori. In practice, however, the channel is not only a priori unknown but also subject to frequency-selective fading, so channel estimation is necessary. Of particular interest is channel estimation at the relay node, which imposes extra requirements on system resources. The authors propose a novel turbo least-squares channel estimator that exploits the superposition structure of the transmitted data. The proposed channel estimator not only requires no pilot symbols but also performs significantly better than the classic approach. The soft-in soft-out minimum mean square error (MMSE) equaliser is also re-derived to match the superimposed data structure. Finally, computer simulation results are presented to verify the proposed algorithm.
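In the paper, the least-squares step runs inside a turbo loop using soft symbol estimates from the superimposed data in place of pilots; the sketch below shows only the classic LS building block for a frequency-selective channel, with assumed channel taps and sizes.

```python
# Least-squares estimation of an L-tap channel from y = conv(x, h) + noise.
import numpy as np

def ls_channel_estimate(x, y, L=4):
    """Estimate h from (estimated) symbols x and received samples y."""
    N = len(y)
    # Toeplitz convolution matrix: row t holds x[t], x[t-1], ..., x[t-L+1]
    A = np.zeros((N, L), dtype=complex)
    for t in range(N):
        for k in range(L):
            if t - k >= 0:
                A[t, k] = x[t - k]
    # h_hat = (A^H A)^{-1} A^H y, computed via lstsq for stability
    h_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
    return h_hat

# Toy check against a known channel
rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=200).astype(complex)    # BPSK symbols
h = np.array([0.8, 0.5 + 0.2j, 0.2, 0.1j])
y = np.convolve(x, h)[:200] + 0.05 * rng.standard_normal(200)
print(np.round(ls_channel_estimate(x, y), 2))
```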