899 results for Simulation analysis
Abstract:
BACKGROUND: The ambition of most molecular biologists is to understand the intricate network of molecular interactions that control biological systems. As scientists uncover the components and the connectivity of these networks, it becomes possible to study their dynamical behavior as a whole and to discover the specific role of each component. Since the behavior of a network is by no means intuitive, computational models are necessary to understand it and to make predictions about it. Unfortunately, most current computational models describe small networks due to the scarcity of available kinetic data. To overcome this problem, we previously published a methodology to convert a signaling network into a dynamical system, even in the total absence of kinetic information. In this paper we present a software implementation of that methodology. RESULTS: We developed SQUAD, a software package for the dynamic simulation of signaling networks using the standardized qualitative dynamical systems approach. SQUAD converts the network into a discrete dynamical system and uses a binary decision diagram algorithm to identify all the steady states of the system. The software then creates a continuous dynamical system and locates its steady states, which lie near those of the discrete system. The software permits simulations on the continuous system, allowing for the modification of several parameters. Importantly, SQUAD includes a framework for perturbing networks in a manner similar to experimental laboratory protocols, for example by activating receptors or knocking out molecular components. Using this software we have successfully reproduced the behavior of the regulatory network implicated in T-helper cell differentiation.
CONCLUSION: The simulation of regulatory networks aims at predicting the behavior of a whole system when subjected to stimuli, such as drugs, or at determining the role of specific components within the network. The predictions can then be used to interpret and/or drive laboratory experiments. SQUAD provides a user-friendly graphical interface, accessible to both computational and experimental biologists, for the fast qualitative simulation of large regulatory networks for which kinetic data are not necessarily available.
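The discrete step SQUAD performs can be illustrated with a toy example. The sketch below finds all fixed points of a hypothetical three-node Boolean network by exhaustive enumeration; SQUAD itself uses binary decision diagrams to do this symbolically for large networks, and the nodes and update rules here are invented for illustration, not taken from the software.

```python
from itertools import product

# Toy 3-node Boolean network (rules are illustrative, not from SQUAD):
# A and B mutually inhibit each other (a classic toggle switch), and
# A activates C. Each rule maps the current state to a node's next value.
rules = {
    "A": lambda s: not s["B"],
    "B": lambda s: not s["A"],
    "C": lambda s: s["A"],
}

def fixed_points(rules):
    """Enumerate all states and keep those unchanged by every rule."""
    nodes = sorted(rules)
    found = []
    for bits in product([False, True], repeat=len(nodes)):
        state = dict(zip(nodes, bits))
        if all(rules[n](state) == state[n] for n in nodes):
            found.append(state)
    return found

for fp in fixed_points(rules):
    print(fp)
```

The toggle switch yields two steady states (A on / B off, or A off / B on), mirroring how a signaling network's fixed points correspond to distinct cellular phenotypes.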
Abstract:
High-throughput technologies are now used to generate more than one type of data from the same biological samples. To properly integrate such data, we propose using co-modules, which describe coherent patterns across paired data sets, and we conceive several modular methods for their identification. We first test these methods using in silico data, demonstrating that the integrative scheme of our Ping-Pong Algorithm uncovers drug-gene associations more accurately when considering noisy or complex data. Second, we provide an extensive comparative study using the gene-expression and drug-response data from the NCI-60 cell lines. Using information from the DrugBank and the Connectivity Map databases, we show that the Ping-Pong Algorithm predicts drug-gene associations significantly better than other methods. Co-modules provide insights into possible mechanisms of action for a wide range of drugs and suggest new targets for therapy.
Abstract:
Objectives: Gentamicin is one of the most commonly prescribed antibiotics for suspected or proven infection in newborns. Because of age-associated (pre- and postnatal) changes in body composition and organ function, large interindividual variability in gentamicin drug levels exists, requiring close monitoring of this drug due to its narrow therapeutic index. We aimed to investigate clinical and demographic factors influencing gentamicin pharmacokinetics (PK) in a large cohort of unselected newborns and to explore optimal regimens based on simulation. Methods: All gentamicin concentration data from newborns treated at the University Hospital Center of Lausanne between December 2006 and October 2011 were retrieved. Gentamicin concentrations were measured within the framework of a routine therapeutic drug monitoring program, in which two concentrations (at 1 h and 12 h) are systematically collected after the first administered dose and a few additional concentrations are sampled along the treatment course. A population PK analysis was performed by comparing various structural models, and the effect of clinical and demographic factors on gentamicin disposition was explored using NONMEM®. Results: A total of 3039 concentrations collected in 994 preterm (median gestational age 32.3 weeks, range 24.2-36.5 weeks) and 455 term newborns were used in the analysis. Most of the data (86%) were sampled after the first dose (C1h and C12h). A two-compartment model best characterized gentamicin PK. Average clearance (CL) was 0.044 L/h/kg (CV 25%), central volume of distribution (Vc) 0.442 L/kg (CV 18%), intercompartmental clearance (Q) 0.040 L/h/kg and peripheral volume of distribution (Vp) 0.122 L/kg. Body weight, gestational age and postnatal age positively influenced CL. Using both gestational age and postnatal age predicted CL better than postmenstrual age alone. CL was affected by dopamine and furosemide administration and, non-significantly, by indometacin.
Body weight, gestational age and dopamine coadministration significantly influenced Vc. Model-based simulation confirms that preterm infants need a higher dose, above 4 mg/kg, and an extended-interval dosage regimen to achieve adequate concentrations. Conclusions: This study, performed on a very large cohort of neonates, identified important factors influencing gentamicin PK. The model will serve to elaborate a Bayesian tool for dosage individualization based on a single measurement.
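The reported two-compartment structure can be explored with a simple forward simulation. The sketch below uses the published population means (CL 0.044 L/h/kg, Vc 0.442 L/kg, Q 0.040 L/h/kg, Vp 0.122 L/kg); the 2 kg body weight, 4 mg/kg IV bolus, and Euler step size are illustrative assumptions, not the study's dosing protocol.

```python
# Two-compartment IV-bolus simulation using the population PK means from
# the abstract, scaled to a hypothetical 2 kg neonate (assumption).
WT = 2.0                 # body weight (kg) -- illustrative
CL = 0.044 * WT          # clearance (L/h)
Vc = 0.442 * WT          # central volume of distribution (L)
Q  = 0.040 * WT          # intercompartmental clearance (L/h)
Vp = 0.122 * WT          # peripheral volume of distribution (L)
dose = 4.0 * WT          # mg, 4 mg/kg bolus into the central compartment

def simulate(t_end=24.0, dt=0.001):
    """Forward-Euler integration of drug amounts; returns the central
    concentrations (mg/L) at 1 h and 12 h post-dose."""
    Ac, Ap = dose, 0.0
    samples = {}
    for i in range(int(t_end / dt) + 1):
        t = i * dt
        for target in (1.0, 12.0):
            if abs(t - target) < dt / 2 and target not in samples:
                samples[target] = Ac / Vc
        dAc = -(CL / Vc) * Ac - (Q / Vc) * Ac + (Q / Vp) * Ap
        dAp = (Q / Vc) * Ac - (Q / Vp) * Ap
        Ac += dAc * dt
        Ap += dAp * dt
    return samples

conc = simulate()
print(conc)  # concentrations at the routine 1 h and 12 h sampling times
```

This mirrors the routine monitoring design (peak-like 1 h and trough-like 12 h samples after the first dose); a real dosing tool would use the covariate model and interindividual variability, not the fixed means.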
Abstract:
In South America, yellow fever (YF) is an established infectious disease that has been identified outside of its traditional endemic areas, affecting human and nonhuman primate (NHP) populations. In the epidemics that occurred in Argentina between 2007 and 2009, several outbreaks affecting humans and howler monkeys (Alouatta spp.) were reported, highlighting the importance of this disease in the context of conservation medicine and public health policies. Considering the lack of information about YF dynamics in New World NHPs, our main goal was to apply modelling tools to better understand YF transmission dynamics among endangered brown howler monkey (Alouatta guariba clamitans) populations in northeastern Argentina. Two complementary modelling tools were used to evaluate brown howler population dynamics in the presence of the disease: Vortex, a stochastic demographic simulation model, and Outbreak, a stochastic disease epidemiology simulation. The baseline model of YF disease epidemiology predicted a very high probability of population decline over the next 100 years. We believe the modelling approach discussed here is a reasonable description of the disease and its effects on the howler monkey population and can be useful to support evidence-based decision-making to guide actions at a regional level.
Abstract:
Our new simple method for calculating accurate Franck-Condon factors, including nondiagonal (i.e., mode-mode) anharmonic coupling, is used to simulate the C2H4+ X̃2B3u ← C2H4 X̃1Ag band in the photoelectron spectrum. An improved vibrational basis set truncation algorithm, which permits very efficient computations, is employed. Because the torsional mode is highly anharmonic, it is separated from the other modes and treated exactly. All other modes are treated through second-order perturbation theory. The perturbation-theory corrections are significant and lead to good agreement with experiment, although the separability assumption for torsion causes the C2D4 results to be not as good as those for C2H4. A variational formulation to overcome this limitation, and to deal with large anharmonicities in general, is suggested.
Abstract:
Earthquakes occurring around the world each year cause thousands of deaths, millions of dollars in damage to infrastructure, and incalculable human suffering. In recent years, satellite technology has been a significant boon to response efforts following an earthquake and its after-effects by providing mobile communications between response teams and remote sensing of damaged areas to disaster management organizations. In 2007, an international team of students and professionals assembled during the International Space University's Summer Session Program in Beijing, China to examine how satellite and ground-based technology could be better integrated to provide an optimised response in the event of an earthquake. The resulting Technology Resources for Earthquake MOnitoring and Response (TREMOR) proposal describes an integrative prototype response system that will implement mobile satellite communication hubs providing telephone and data links between response teams, onsite telemedicine consultation for emergency first-responders, and satellite navigation systems that will locate and track emergency vehicles and guide search-and-rescue crews. A prototype earthquake simulation system is also proposed, integrating historical data, earthquake precursor data, and local geomatics and infrastructure information to predict the damage that could occur in the event of an earthquake. The backbone of these proposals is a comprehensive education and training program to help individuals, communities and governments prepare in advance. The TREMOR team recommends the coordination of these efforts through a centralised, non-governmental organization.
Abstract:
Biochemical systems are commonly modelled by systems of ordinary differential equations (ODEs). A particular class of such models called S-systems have recently gained popularity in biochemical system modelling. The parameters of an S-system are usually estimated from time-course profiles. However, finding these estimates is a difficult computational problem. Moreover, although several methods have been recently proposed to solve this problem for ideal profiles, relatively little progress has been reported for noisy profiles. We describe a special feature of a Newton-flow optimisation problem associated with S-system parameter estimation. This enables us to significantly reduce the search space, and also lends itself to parameter estimation for noisy data. We illustrate the applicability of our method by applying it to noisy time-course data synthetically produced from previously published 4- and 30-dimensional S-systems. In addition, we propose an extension of our method that allows the detection of network topologies for small S-systems. We introduce a new method for estimating S-system parameters from time-course profiles. We show that the performance of this method compares favorably with competing methods for ideal profiles, and that it also allows the determination of parameters for noisy profiles.
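The abstract does not reproduce the equations, but an S-system has the canonical power-law form dX_i/dt = α_i ∏_j X_j^g_ij − β_i ∏_j X_j^h_ij. The sketch below integrates a tiny two-variable S-system to its steady state; the rate constants and kinetic orders are invented for illustration and are not those of the cited 4- and 30-dimensional systems.

```python
# Minimal S-system simulation: dX_i/dt = a_i * prod_j X_j**g[i][j]
#                                      - b_i * prod_j X_j**h[i][j]
# All parameter values below are illustrative assumptions.
a = [2.0, 1.0]                    # production rate constants (alpha)
b = [1.0, 1.0]                    # degradation rate constants (beta)
g = [[0.0, -0.5], [0.5, 0.0]]     # production kinetic orders
h = [[0.5, 0.0], [0.0, 0.5]]      # degradation kinetic orders

def prod_pow(x, exps):
    p = 1.0
    for xi, e in zip(x, exps):
        p *= xi ** e
    return p

def step(x, dt=0.01):
    """One forward-Euler step of the S-system ODEs."""
    return [xi + dt * (a[i] * prod_pow(x, g[i]) - b[i] * prod_pow(x, h[i]))
            for i, xi in enumerate(x)]

x = [1.0, 1.0]
for _ in range(5000):             # integrate to (near) steady state
    x = step(x)
print(x)                          # approaches the fixed point [2, 2]
```

Parameter estimation inverts this picture: given noisy time-course samples of x, one searches for the α, β, g, h that reproduce them, which is the hard optimisation problem the paper addresses.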
Abstract:
Evaluating the possible benefits of the introduction of genetically modified (GM) crops must address the issue of consumer resistance as well as the complex regulation that has ensued. In the European Union (EU) this regulation envisions the "co-existence" of GM food with conventional and quality-enhanced products, mandates the labelling and traceability of GM products, and tolerates only a stringent threshold of adventitious GM content in other products. All these elements are brought together within a partial equilibrium model of the EU agricultural food sector. The model comprises conventional, GM and organic food. Demand is modelled in a novel fashion, whereby organic and conventional products are treated as horizontally differentiated but GM products are vertically differentiated (weakly inferior) relative to conventional ones. Supply accounts explicitly for the land constraint at the sector level and for the need for additional resources to produce organic food. Model calibration and simulation allow insights into the qualitative and quantitative effects of the large-scale introduction of GM products in the EU market. We find that the introduction of GM food reduces overall EU welfare, mostly because of the associated need for costly segregation of non-GM products, but the producers of quality-enhanced products actually benefit.
Abstract:
Meta-analysis of genome-wide association studies (GWASs) has led to the discovery of many common variants associated with complex human diseases. There is a growing recognition that identifying "causal" rare variants also requires large-scale meta-analysis. The fact that association tests with rare variants are performed at the gene level rather than at the variant level poses unprecedented challenges in the meta-analysis. First, different studies may adopt different gene-level tests, so the results are not compatible. Second, gene-level tests require multivariate statistics (i.e., components of the test statistic and their covariance matrix), which are difficult to obtain. To overcome these challenges, we propose to perform gene-level tests for rare variants by combining the results of single-variant analysis (i.e., p values of association tests and effect estimates) from participating studies. This simple strategy is possible because of an insight that multivariate statistics can be recovered from single-variant statistics, together with the correlation matrix of the single-variant test statistics, which can be estimated from one of the participating studies or from a publicly available database. We show both theoretically and numerically that the proposed meta-analysis approach provides accurate control of the type I error and is as powerful as joint analysis of individual participant data. This approach accommodates any disease phenotype and any study design and produces all commonly used gene-level tests. An application to the GWAS summary results of the Genetic Investigation of ANthropometric Traits (GIANT) consortium reveals rare and low-frequency variants associated with human height. The relevant software is freely available.
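The key insight, that a gene-level statistic can be recovered from single-variant summary statistics plus their correlation matrix, can be illustrated for a simple burden-style test: T = Σ z_i has variance 1ᵀR1 under the null, so Z_gene = T / √(1ᵀR1). This generic sketch is not the authors' software, and the z-scores and correlation (LD) matrix below are invented numbers.

```python
import math

# Hypothetical signed single-variant z-scores for one gene (e.g., from a
# per-variant meta-analysis) and their correlation matrix R -- illustrative
# values, not real data.
z = [1.8, 2.1, 0.4]
R = [[1.0, 0.3, 0.1],
     [0.3, 1.0, 0.2],
     [0.1, 0.2, 1.0]]

def burden_z(z, R):
    """Gene-level burden statistic: the sum of single-variant z-scores,
    standardized by its null variance 1' R 1."""
    n = len(z)
    T = sum(z)
    var = sum(R[i][j] for i in range(n) for j in range(n))
    return T / math.sqrt(var)

print(round(burden_z(z, R), 3))
```

No individual-level genotypes appear anywhere: the variant-level summaries and an external estimate of R suffice, which is exactly why the strategy scales to consortium meta-analysis.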
Abstract:
The sample size, the types of variables, the format used for measurement, and the construction of instruments to collect valid and reliable data must all be considered during the research process. In the social and health sciences, and more specifically in nursing, data-collection instruments are usually composed of latent variables, i.e., variables that cannot be directly observed. These facts emphasize the importance of deciding how to measure study variables (using an ordinal scale, or a Likert or Likert-type scale). Psychometric scales are examples of instruments that are affected by the type of variables that comprise them, which can cause problems with measurement and statistical analysis (parametric versus non-parametric tests). Hence, investigators using these variables must rely on suppositions based on simulation studies, or on recommendations based on scientific evidence, in order to make the best decisions.
Abstract:
A method is offered that makes it possible to apply generalized canonical correlation analysis (CANCOR) to two or more matrices of different row and column order. The new method optimizes the generalized canonical correlation analysis objective by considering only the observed values. This is achieved by employing selection matrices. We present and discuss fit measures to assess the quality of the solutions. In a simulation study we assess the performance of our new method and compare it to an existing procedure called GENCOM, proposed by Green and Carroll. We find that our new method outperforms the GENCOM algorithm both with respect to model fit and recovery of the true structure. Moreover, as our new method does not require any type of iteration it is easier to implement and requires less computation. We illustrate the method by means of an example concerning the relative positions of the political parties in the Netherlands based on provincial data.
Abstract:
We develop a general error analysis framework for the Monte Carlo simulation of densities for functionals in Wiener space. We also study variance reduction methods with the help of Malliavin derivatives. For this, we give some general heuristic principles which are applied to diffusion processes. A comparison with kernel density estimates is made.
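The kernel-density baseline against which such estimators are compared can be sketched directly: simulate many diffusion endpoints and smooth them with a Gaussian kernel. The Ornstein-Uhlenbeck dynamics, initial condition, bandwidth, and sample size below are illustrative choices, and this is the plain KDE approach rather than the Malliavin-based estimator studied in the paper.

```python
import math
import random

# Monte Carlo kernel density estimate of the time-T law of a diffusion,
# here dX = -X dt + dW with X0 = 1, discretized by Euler-Maruyama.
random.seed(0)

def endpoint(T=1.0, n_steps=100):
    dt = T / n_steps
    x = 1.0
    for _ in range(n_steps):
        x += -x * dt + random.gauss(0.0, math.sqrt(dt))
    return x

samples = [endpoint() for _ in range(5000)]

def kde(y, samples, h=0.2):
    """Gaussian kernel density estimate at the point y (bandwidth h)."""
    n = len(samples)
    return sum(math.exp(-0.5 * ((y - s) / h) ** 2) for s in samples) \
        / (n * h * math.sqrt(2.0 * math.pi))

print(kde(0.0, samples))
```

The KDE's error mixes Monte Carlo noise with smoothing bias from the bandwidth h, which is precisely the trade-off an integration-by-parts (Malliavin) representation of the density is designed to avoid.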
Abstract:
This paper describes a simulation package designed to estimate the annual income taxes paid by respondents of the Swiss Household Panel (SHP). In Switzerland, the 26 cantons have their own tax systems. Additionally, tax levels vary among the more than 2000 municipalities and over time. The simulation package takes account of this complexity by building on existing tables of tax levels provided by the Swiss Federal Tax Administration Office. Because these are limited to a few household types and only 812 municipalities, they have to be extended to cover all households and municipalities. A further drawback of these tables is that they neglect several deductions. The tax simulation package fills this gap by additionally taking account of deductions for children, double-earner couples, the third pillar, and support for dependent persons according to cantonal legislation. The resulting variable on direct taxes not only serves to calculate household income net of taxes, but can also serve as a variable for analysis in its own right.
Abstract:
The work presented evaluates the statistical characteristics of regional bias and expected error in reconstructions of real positron emission tomography (PET) data from human brain fluorodeoxyglucose (FDG) studies carried out by the maximum likelihood estimator (MLE) method with a robust stopping rule, and compares them with the results of filtered backprojection (FBP) reconstructions and with the method of sieves. The task of evaluating radioisotope uptake in regions of interest (ROIs) is investigated. An assessment of bias and variance in uptake measurements is carried out with simulated data. Then, by using three different transition matrices with different degrees of accuracy and a components-of-variance model for statistical analysis, it is shown that the characteristics obtained from real human FDG brain data are consistent with the results of the simulation studies.
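The iterative MLE reconstruction referred to here is commonly realized as the multiplicative MLEM update x ← (x / Aᵀ1) · Aᵀ(y / Ax). The sketch below runs that update on an invented 3-pixel, 3-bin toy problem with noiseless data; real PET reconstructions use large, physically derived system matrices, noisy counts, and (as in this paper) a stopping rule, none of which are modeled here.

```python
# Toy MLEM reconstruction sketch (illustrative, not the authors' code):
# the system matrix A maps a 3-pixel image to 3 detector bins.
A = [[0.8, 0.2, 0.0],
     [0.1, 0.8, 0.1],
     [0.0, 0.2, 0.8]]
x_true = [1.0, 4.0, 2.0]          # invented activity distribution

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

y = matvec(A, x_true)             # noiseless projections, for simplicity

sens = [sum(A[i][j] for i in range(3)) for j in range(3)]  # A' 1
x = [1.0, 1.0, 1.0]               # flat initial image
for _ in range(200):
    yhat = matvec(A, x)
    ratio = [yi / yh for yi, yh in zip(y, yhat)]           # y / (A x)
    back = [sum(A[i][j] * ratio[i] for i in range(3))      # A' ratio
            for j in range(3)]
    x = [xj * bj / sj for xj, bj, sj in zip(x, back, sens)]
print([round(v, 3) for v in x])
```

Each MLEM iteration preserves the total measured counts, and with noisy data the variance of the image grows with iteration number, which is why stopping rules of the kind evaluated in this study matter.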
Abstract:
The impact of charcoal production on soil hydraulic properties, runoff response and erosion susceptibility was studied in both field and simulation experiments. Core and composite samples from 12 randomly selected sites within the Kotokosu catchment were taken from the 0-10 cm layer of a charcoal site soil (CSS) and adjacent field soils (AFS). These samples were used to determine saturated hydraulic conductivity (Ksat), bulk density, total porosity, soil texture and color. Infiltration, surface albedo and soil surface temperature were also measured in both CSS and AFS. Measured properties were used as inputs to a rainfall-runoff simulation experiment on a smooth (5% slope) plot of 25 x 25 m grids with 10 cm resolution. Typical rainfall intensities of the study watershed (high, moderate and low) were applied to five different combinations of Ksat distributions that could be expected in this landscape. The results showed significantly (p < 0.01) higher flow characteristics of the soil under charcoal kilns (an increase of 88%). Infiltration was enhanced and runoff volume reduced significantly. The results showed runoff reductions of about 37 and 18%, and runoff coefficients ranging from 0.47-0.75 and 0.04-0.39, for simulations based on high (200 mm h-1) and moderate (100 mm h-1) rainfall events over the CSS and AFS areas, respectively. Other potential impacts of charcoal production on watershed hydrology are described. The results presented, together with watershed measurements when available, are expected to enhance understanding of the hydrological responses of ecosystems to indiscriminate charcoal production and related activities in this region.