969 results for computational models
Abstract:
Complex systems science is an interdisciplinary field grouping under the same umbrella dynamical phenomena from the social, natural, and mathematical sciences. The emergence of a higher-order organization or behavior, beyond what is expected from the linear addition of the parts, is a key feature shared by all these systems. Most complex systems can be modeled as networks that represent the interactions amongst the system's components. Beyond the actual nature of the parts' interactions, the intrinsic topological structure of the underlying network is believed to play a crucial role in the remarkable emergent behaviors exhibited by these systems. The topology is also a key factor in explaining the extraordinary flexibility and resilience to perturbations observed in transmission and diffusion phenomena.

In this work, we study the effect of different network structures on the performance and fault tolerance of systems in two different contexts. In the first part, we study cellular automata, a simple paradigm for distributed computation. Cellular automata are made of basic Boolean computational units, the cells, which rely on simple rules and information from the surrounding cells to perform a global task. The limited visibility of the cells can be modeled as a network, where interactions amongst cells are governed by an underlying structure, usually a regular one. To increase the performance of cellular automata, we chose to change their topology. We applied computational principles inspired by Darwinian evolution, called evolutionary algorithms, to alter the system's topological structure, starting from either a regular or a random one. The outcome is remarkable: the resulting topologies share properties of both regular and random networks, and display similarities with the Watts-Strogatz small-world networks found in social systems. Moreover, the performance and tolerance to probabilistic faults of our small-world-like cellular automata surpass those of regular ones.

In the second part, we use the context of biological genetic regulatory networks and, in particular, Kauffman's random Boolean networks model. In some ways, this model is close to cellular automata, although it is not expected to perform any task. Instead, it simulates the time evolution of genetic regulation within living organisms under strict conditions. The original model, though very attractive in its simplicity, suffers from important shortcomings unveiled by recent advances in genetics and biology. We propose to use these new discoveries to improve the original model. First, we used artificial topologies believed to be closer to those of gene regulatory networks; we also studied actual biological organisms and used parts of their genetic regulatory networks in our models. Second, we addressed the improbable full synchronicity of the events taking place in Boolean networks and proposed a more biologically plausible cascading update scheme. Finally, we tackled the actual Boolean functions of the model, i.e. the specifics of how genes activate according to the activity of upstream genes, and presented a new update function that takes into account the actual promoting and repressing effects of one gene on another. Our improved models exhibit the expected, biologically sound behavior of previous GRN models, yet with superior resistance to perturbations.
We believe they are one step closer to biological reality.
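As an illustration of the kind of model discussed in this abstract, here is a minimal sketch, not the thesis's implementation, of a Kauffman-style random Boolean network with synchronous updates; the network size, connectivity k=2, and seeds are arbitrary choices for the example.

```python
import random

def make_rbn(n=10, k=2, seed=42):
    """Random Boolean network: each node gets k random input nodes and a
    random Boolean function, stored as a lookup table over 2**k input states."""
    rng = random.Random(seed)
    inputs = [rng.sample(range(n), k) for _ in range(n)]
    tables = [[rng.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: every node reads its inputs from the current state
    and applies its Boolean function simultaneously."""
    new_state = []
    for node, ins in enumerate(inputs):
        idx = 0
        for bit in ins:
            idx = (idx << 1) | state[bit]
        new_state.append(tables[node][idx])
    return new_state

inputs, tables = make_rbn()
rng = random.Random(0)
state = [rng.randint(0, 1) for _ in range(len(inputs))]
for t in range(5):
    print(t, state)
    state = step(state, inputs, tables)
```

The asynchronous "cascading" scheme mentioned above would replace the simultaneous update in `step` with one that propagates changes node by node.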
Abstract:
Using a suitable Hull and White type formula, we develop a methodology to obtain a second-order approximation to the implied volatility for very short maturities. Using this approximation, we accurately calibrate the full set of parameters of the Heston model. One of the reasons our calibration for short maturities is so accurate is that we also take into account the term structure for large maturities. We may say that calibration is not "memoryless", in the sense that the option's behavior far away from maturity does influence calibration when the option gets close to expiration. Our results provide a way to perform a quick calibration of a closed-form approximation to vanilla options that can then be used to price exotic derivatives. The methodology is simple, accurate, fast, and requires minimal computational cost.
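The second-order short-maturity expansion itself is not given in the abstract and is not reproduced here. As a building block for this kind of calibration, the sketch below recovers the implied volatility from a vanilla option price by bisection on the Black-Scholes formula; all parameter values are illustrative.

```python
import math

def bs_call(spot, strike, tau, rate, sigma):
    """Black-Scholes price of a European call."""
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    d1 = (math.log(spot / strike) + (rate + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return spot * N(d1) - strike * math.exp(-rate * tau) * N(d2)

def implied_vol(price, spot, strike, tau, rate, lo=1e-6, hi=5.0, tol=1e-10):
    """Bisection for sigma: the call price is strictly increasing in sigma."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(spot, strike, tau, rate, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round trip at a short maturity (tau in years): price at sigma = 0.2, then invert.
p = bs_call(100.0, 105.0, 0.05, 0.01, 0.2)
print(implied_vol(p, 100.0, 105.0, 0.05, 0.01))  # ~0.2
```

A Heston calibration in this spirit would fit the model parameters by least squares against such implied volatilities across strikes and maturities.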
Abstract:
A new direction of research in Competitive Location theory incorporates theories of Consumer Choice Behavior in its models. Following this direction, this paper studies the importance of consumer behavior with respect to distance or transportation costs in the optimality of locations obtained by traditional Competitive Location models. To do this, it considers different ways of defining a key parameter in the basic Maximum Capture model (MAXCAP). This parameter reflects various ways of taking distance into account, based on several Consumer Choice Behavior theories. The optimal locations, and the deviation in demand captured when the optimal locations of the other models are used instead of the true ones, are computed for each model. A metaheuristic based on GRASP and Tabu Search procedures is presented to solve all the models. Computational experience and an application to a 55-node network are also presented.
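As a generic illustration of the GRASP scheme named above, and not the paper's implementation, the sketch below applies a greedy-randomized construction followed by swap-based local search to a toy capture problem: open P sites so as to maximize the demand captured within a radius. The instance data and the capture rule are assumptions made for the example.

```python
import random

# Toy instance (assumed data): node demands, Manhattan distances, P sites.
demand = [10, 20, 15, 30, 25, 5, 40, 12]
coords = [(0, 0), (1, 3), (4, 1), (5, 5), (2, 2), (6, 0), (3, 4), (0, 5)]
dist = [[abs(a[0] - b[0]) + abs(a[1] - b[1]) for b in coords] for a in coords]
P, RADIUS = 2, 3

def captured(sites):
    # A node's demand counts as captured if some open site is within RADIUS.
    return sum(d for i, d in enumerate(demand)
               if any(dist[i][s] <= RADIUS for s in sites))

def grasp(iters=50, alpha=0.3, seed=1):
    rng = random.Random(seed)
    best, best_val = None, -1
    for _ in range(iters):
        # Construction: pick each site at random from a restricted
        # candidate list (RCL) of the top alpha fraction of moves.
        sites = set()
        while len(sites) < P:
            cand = sorted(((captured(sites | {j}), j)
                           for j in range(len(demand)) if j not in sites),
                          reverse=True)
            rcl = cand[:max(1, int(alpha * len(cand)))]
            sites.add(rng.choice(rcl)[1])
        # Local search: first-improvement swaps of an open site for a closed one.
        improved = True
        while improved:
            improved = False
            for s in list(sites):
                for j in range(len(demand)):
                    if j in sites:
                        continue
                    trial = (sites - {s}) | {j}
                    if captured(trial) > captured(sites):
                        sites, improved = trial, True
                        break
                if improved:
                    break
        if captured(sites) > best_val:
            best, best_val = set(sites), captured(sites)
    return sorted(best), best_val

print(grasp())
```

The paper pairs GRASP with Tabu Search; a tabu list forbidding recently reversed swaps would slot into the local-search phase.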
Abstract:
Models are presented for the optimal location of hubs in airline networks that take congestion effects into consideration. Hubs, which are the most congested airports, are modeled as M/D/c queuing systems, that is, Poisson arrivals, deterministic service times, and $c$ servers. A formula is derived for the probability of a given number of customers in the system, which is later used to propose a probabilistic constraint. This constraint limits the probability of $b$ airplanes in queue to be less than a value $\alpha$. Due to the computational complexity of the formulation, the model is solved using a metaheuristic based on tabu search. Computational experience is presented.
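The derived M/D/c formula is not given in the abstract; as a simpler stand-in, the sketch below evaluates the analogous probabilistic constraint for an M/M/c queue (exponential rather than deterministic service times), whose steady-state distribution has a standard closed form. Arrival and service rates are illustrative.

```python
from math import factorial

def mmc_queue_tail(lam, mu, c, b):
    """P(at least b customers waiting) in a steady-state M/M/c queue,
    i.e. P(N >= c + b); requires lam < c * mu."""
    a = lam / mu      # offered load
    rho = a / c       # server utilization (< 1)
    p0 = 1.0 / (sum(a ** n / factorial(n) for n in range(c))
                + a ** c / (factorial(c) * (1.0 - rho)))
    # P(N = c + k) = (a**c / c!) * rho**k * p0, summed over k >= b.
    return (a ** c / factorial(c)) * p0 * rho ** b / (1.0 - rho)

# Constraint of the kind described: probability of b or more airplanes
# in queue must stay below alpha (all numbers illustrative).
lam, mu, c, b, alpha = 40.0, 12.0, 4, 5, 0.05
p = mmc_queue_tail(lam, mu, c, b)
print(f"P(queue >= {b}) = {p:.4f}, constraint satisfied: {p < alpha}")
```

In the location model, such a check would be evaluated at every candidate hub for each trial configuration visited by the tabu search.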
Abstract:
In this paper we address the issue of locating hierarchical facilities in the presence of congestion. Two hierarchical models are presented, in which lower-level servers attend requests first and then refer some of the served customers to higher-level servers. In the first model, the objective is to find the minimum number of servers, and their locations, that will cover a given region within a distance or time standard. The second model is cast as a Maximal Covering Location formulation. A heuristic procedure is then presented, together with computational experience. Finally, some extensions of these models that address other types of spatial configurations are offered.
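As a generic illustration of the first model's covering objective, and not the paper's heuristic, the sketch below greedily opens servers until every demand node is covered within the distance standard; the instance is invented for the example.

```python
# Greedy covering: open servers until every node lies within the
# distance (or time) standard S of some open server (data illustrative).
coords = [(0, 0), (1, 4), (3, 1), (5, 5), (6, 2), (2, 6)]
S = 3

def covers(i, j):
    # A server at node i covers node j if their Manhattan distance is <= S.
    return abs(coords[i][0] - coords[j][0]) + abs(coords[i][1] - coords[j][1]) <= S

uncovered = set(range(len(coords)))
servers = []
while uncovered:
    # Greedy choice: the candidate covering the most still-uncovered nodes.
    best = max(range(len(coords)), key=lambda i: sum(covers(i, j) for j in uncovered))
    servers.append(best)
    uncovered -= {j for j in uncovered if covers(best, j)}

print("servers opened at nodes:", servers)
```

The second, Maximal Covering variant would instead fix the number of servers and maximize the covered demand, and the hierarchical models add referral flows from lower-level to higher-level servers on top of either objective.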
Abstract:
In recent years, both homing endonucleases (HEases) and zinc-finger nucleases (ZFNs) have been engineered and selected for the targeting of desired human loci for gene therapy. However, enzyme engineering is lengthy and expensive, and the off-target effect of the manufactured endonucleases is difficult to predict. Moreover, enzymes selected to cleave a human DNA locus may not cleave the homologous locus in the genome of animal models because of sequence divergence, thus hampering attempts to assess the in vivo efficacy and safety of any engineered enzyme prior to its application in human trials. Here, we show that naturally occurring HEases can be found that cleave desirable human targets. Some of these enzymes are also shown to cleave the homologous sequence in the genome of animal models. In addition, the distribution of off-target effects may be more predictable for native HEases. Based on our experimental observations, we present the HomeBase algorithm, database and web server, which allow a high-throughput computational search and assignment of HEases for the targeting of specific loci in the human and other genomes. We experimentally validate the predicted target specificity of candidate fungal, bacterial and archaeal HEases using cell-free, yeast and archaeal assays.
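The HomeBase algorithm itself is not detailed in the abstract. As a hedged illustration of the underlying task, the sketch below scans a DNA sequence for sites matching an enzyme's recognition sequence up to a mismatch budget, the simplest form of a target-site search; the sequences and threshold are invented.

```python
def find_targets(genome, site, max_mismatches=2):
    """Return (position, mismatches) for every window of `genome` that
    matches `site` with at most `max_mismatches` substitutions."""
    hits = []
    for pos in range(len(genome) - len(site) + 1):
        window = genome[pos:pos + len(site)]
        mismatches = sum(1 for a, b in zip(window, site) if a != b)
        if mismatches <= max_mismatches:
            hits.append((pos, mismatches))
    return hits

# Invented 22-bp recognition site and genomic fragment.
site = "GATCCTAAGGTAGCGAAATTCA"
genome = "TTGATCCTAAGGTAGCGAAATTCAGGCCGATCCTAAGCTAGCGAAATTCAGG"
print(find_targets(genome, site))
```

A real search of this kind would also scan the reverse complement and weight positions by the enzyme's observed cleavage tolerance rather than counting raw mismatches.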
Abstract:
To investigate their role in receptor coupling to G(q), we mutated all basic amino acids and some conserved hydrophobic residues of the cytosolic surface of the alpha(1b)-adrenergic receptor (AR). The wild-type and mutated receptors were expressed in COS-7 cells and characterized for their ligand binding properties and ability to increase inositol phosphate accumulation. The experimental results have been interpreted in the context of both an ab initio model of the alpha(1b)-AR and a new homology model built on the recently solved crystal structure of rhodopsin. Among the twenty-three basic amino acids mutated, only mutations of three, Arg(254) and Lys(258) in the third intracellular loop and Lys(291) at the cytosolic extension of helix 6, markedly impaired receptor-mediated inositol phosphate production. Additionally, mutations of two conserved hydrophobic residues, Val(147) and Leu(151) in the second intracellular loop, had significant effects on receptor function. The functional analysis of the receptor mutants, in conjunction with the predictions of molecular modeling, supports the hypothesis that Arg(254), Lys(258), and Leu(151) are directly involved in receptor-G protein interaction and/or receptor-mediated activation of the G protein. In contrast, the residues belonging to the cytosolic extensions of helices 3 and 6 play a predominant role in the activation process of the alpha(1b)-AR. These findings contribute to the delineation of the molecular determinants of the alpha(1b)-AR/G(q) interface.
Abstract:
BACKGROUND: Qualitative frameworks, especially those based on the logical discrete formalism, are increasingly used to model regulatory and signalling networks. A major advantage of these frameworks is that they do not require precise quantitative data, and that they are well-suited for studies of large networks. While numerous groups have developed specific computational tools that provide original methods to analyse qualitative models, a standard format to exchange qualitative models has been missing. RESULTS: We present the Systems Biology Markup Language (SBML) Qualitative Models Package ("qual"), an extension of the SBML Level 3 standard designed for computer representation of qualitative models of biological networks. We demonstrate the interoperability of models via SBML qual through the analysis of a specific signalling network by three independent software tools. Furthermore, the collective effort to define the SBML qual format paved the way for the development of LogicalModel, an open-source model library, which will facilitate the adoption of the format as well as the collaborative development of algorithms to analyse qualitative models. CONCLUSIONS: SBML qual allows the exchange of qualitative models among a number of complementary software tools. SBML qual has the potential to promote collaborative work on the development of novel computational approaches, as well as on the specification and the analysis of comprehensive qualitative models of regulatory and signalling networks.
Abstract:
A haplotype is an m-long binary vector. The XOR-genotype of two haplotypes is the m-vector of their coordinate-wise XOR. We study the following problem: Given a set of XOR-genotypes, reconstruct their haplotypes so that the set of resulting haplotypes can be mapped onto a perfect phylogeny (PP) tree. The question is motivated by studying population evolution in human genetics, and is a variant of the perfect phylogeny haplotyping problem that has received intensive attention recently. Unlike the latter problem, in which the input is "full" genotypes, here we assume less informative input, which may therefore be more economical to obtain experimentally. Building on ideas of Gusfield, we show how to solve the problem in polynomial time, by a reduction to the graph realization problem. The actual haplotypes are not uniquely determined by the tree they map onto, and the tree itself may or may not be unique. We show that tree uniqueness implies uniquely determined haplotypes, up to inherent degrees of freedom, and give a sufficient condition for uniqueness. To actually determine the haplotypes given the tree, additional information is necessary. We show that two or three full genotypes suffice to reconstruct all the haplotypes, and present a linear-time algorithm for identifying those genotypes.
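A minimal sketch of the abstract's basic objects: haplotypes as binary vectors and their coordinate-wise XOR genotype. It also demonstrates the inherent degree of freedom mentioned above: XOR-ing both haplotypes with the same vector leaves the XOR-genotype unchanged, so extra information is needed to pin the haplotypes down. Data are illustrative.

```python
def xor_genotype(h1, h2):
    """Coordinate-wise XOR of two binary haplotype vectors."""
    return [a ^ b for a, b in zip(h1, h2)]

h1 = [0, 1, 1, 0, 1]
h2 = [0, 1, 0, 1, 1]
g = xor_genotype(h1, h2)
print(g)  # [0, 0, 1, 1, 0]

# Degree of freedom: shifting both haplotypes by any fixed vector
# (here, complementing every site) yields the same XOR-genotype.
flip = [1, 1, 1, 1, 1]
print(xor_genotype(xor_genotype(h1, flip), xor_genotype(h2, flip)) == g)  # True
```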
Abstract:
(1R)-Normetanephrine is the natural stereoisomeric substrate for sulfotransferase 1A3 (SULT1A3)-catalyzed sulfonation. Nothing appears to be known about the enantioselectivity of the reaction, despite its potential significance in the metabolism of adrenergic amines and in clinical biochemistry. We compared the kinetic parameters of the sulfoconjugation of synthetic (1R)-normetanephrine and (1S)-normetanephrine by recombinant human SULT1A3 with a docking model of each normetanephrine enantiomer with SULT1A3 and the 3'-phosphoadenosine-5'-phosphosulfate cofactor, on the basis of molecular modeling and molecular dynamics simulations of the stability of the complexes. The K(M), V(max), and k(cat) values for the sulfonation of (1R)-normetanephrine, (1S)-normetanephrine, and racemic normetanephrine were similar. In silico models were consistent with these findings, as they showed that the binding modes of the two enantiomers were almost identical. In conclusion, SULT1A3 is not substrate-enantioselective toward normetanephrine, an unexpected finding explainable by a mutual adaptability between the ligands and SULT1A3 through an "induced-fit model" in the catalytic pocket.
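For context on the reported kinetic parameters, here is a minimal sketch of the Michaelis-Menten rate law that K(M) and V(max) parameterize; the parameter values are invented for illustration, not the paper's measurements.

```python
def michaelis_menten(s, vmax, km):
    """Initial rate v = Vmax * [S] / (KM + [S])."""
    return vmax * s / (km + s)

# Two substrates with similar KM and Vmax give near-identical rate
# curves, which is the sense in which the enantiomers' kinetics match.
for s in (0.5, 1.0, 5.0, 20.0):
    print(s, round(michaelis_menten(s, vmax=10.0, km=2.0), 3),
          round(michaelis_menten(s, vmax=9.5, km=2.2), 3))
```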
Abstract:
PURPOSE OF REVIEW: HIV targets primary CD4(+) T cells. The virus depends on the physiological state of its target cells for efficient replication, and, in turn, viral infection perturbs the cellular state significantly. Identifying the virus-host interactions that drive these dynamic changes is important for a better understanding of viral pathogenesis and persistence. The present review focuses on experimental and computational approaches to study the dynamics of viral replication and latency. RECENT FINDINGS: It was recently shown that only a fraction of the inducible latently infected reservoir is successfully induced upon stimulation in ex-vivo models, while additional rounds of stimulation allow reactivation of further latently infected cells. This highlights the potential role of treatment duration and timing as important factors for successful reactivation of latently infected cells. The dynamics of HIV productive infection and latency have been investigated using transcriptome and proteome data. The cellular activation state has been shown to be a major determinant of viral reactivation success. Mathematical models of latency have been used to explore the dynamics of latent viral reservoir decay. SUMMARY: Timing is an important component of biological interactions. Temporal analyses covering aspects of the viral life cycle are essential for gathering a comprehensive picture of HIV's interaction with the host cell and untangling the complexity of latency. Understanding the dynamic changes tipping the balance between success and failure of HIV particle production might be key to eradicating the viral reservoir.
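As a hedged illustration of the latency models the review mentions, the sketch below implements the simplest deterministic view: exponential decay of the latent reservoir with a fixed half-life. The numbers are placeholders, not results from the review.

```python
def reservoir(t_years, l0=1.0e6, half_life_years=3.7):
    """Latently infected cell count after t years under pure exponential
    decay; l0 and the half-life are illustrative placeholders."""
    return l0 * 0.5 ** (t_years / half_life_years)

for t in (0, 5, 10, 20, 40):
    print(t, f"{reservoir(t):.3g}")
```

More realistic models add terms for reactivation, homeostatic proliferation, and replenishment, which is why treatment timing and repeated stimulation matter in the ex-vivo findings discussed above.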
Abstract:
Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced-complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales.
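A hedged sketch of the fixed-lid idea described above: with a planar water surface of known elevation and slope, the depth at each point of a cross section is the surface elevation minus the bed elevation, and a Manning-type relation converts depth and slope into unit discharge. This is not the paper's model, and all values are illustrative.

```python
# Fixed-lid planar water surface: depth = surface elevation - bed elevation
# (clipped at zero); unit discharge from Manning: q = (1/n) * h**(5/3) * sqrt(S).
bed = [98.2, 97.5, 96.9, 97.8, 99.1]  # bed elevations along a cross section (m)
ws = 100.0                            # planar water surface elevation (m)
n = 0.025                             # Manning roughness coefficient
S = 5e-5                              # water surface slope (very low, as in large sand-bed rivers)

depth = [max(ws - z, 0.0) for z in bed]
q = [(1.0 / n) * h ** (5.0 / 3.0) * S ** 0.5 for h in depth]
for h, qi in zip(depth, q):
    print(f"depth {h:.2f} m -> unit discharge {qi:.3f} m^2/s")
```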
Abstract:
Computational anatomy with magnetic resonance imaging (MRI) is well established as a noninvasive biomarker of Alzheimer's disease (AD); however, there is less certainty about its dependency on the staging of AD. We use classical group analyses and automated machine learning classification of standard structural MRI scans to investigate AD diagnostic accuracy from the preclinical phase to clinical dementia. Longitudinal data from the Alzheimer's Disease Neuroimaging Initiative were stratified into 4 groups according to clinical status: (1) AD patients; (2) mild cognitive impairment (MCI) converters; (3) MCI nonconverters; and (4) healthy controls. These data were submitted to a support vector machine. The resulting classifier performed significantly above chance level (62%) in detecting AD as early as 4 years before conversion from MCI. Voxel-based univariate tests confirmed the plausibility of our findings, detecting a distributed network of hippocampal-temporoparietal atrophy in AD patients. We also identified a subgroup of control subjects whose brain structure and cognitive changes were highly similar to those observed in AD. Our results indicate that computational anatomy can detect AD substantially earlier than suggested by current models. The demonstrated differential spatial pattern of atrophy between correctly and incorrectly classified AD patients challenges the assumption of a uniform pathophysiological process underlying clinically identified AD.
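A minimal sketch of the classification step described above, using scikit-learn's support vector machine on synthetic feature vectors that stand in for structural MRI features; the data here are simulated, not ADNI data, and the accuracy printed says nothing about the study's result.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Simulated stand-ins for regional gray-matter features: 80 "patients"
# vs 80 "controls", with a small mean shift on a subset of features.
X_pat = rng.normal(size=(80, 50)); X_pat[:, :10] += 0.8
X_ctl = rng.normal(size=(80, 50))
X = np.vstack([X_pat, X_ctl])
y = np.array([1] * 80 + [0] * 80)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```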
Abstract:
We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, building on truncated Newton methods (TN), which have been an effective approach for large-scale unconstrained optimization, we develop the use of efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), with that of a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse-grid correction as an optimization search direction and eventually scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of computational work and the quality of the optical flow estimation.
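A hedged skeleton of the unidirectional coarse-to-fine (MR/OPT-style) strategy, run with SciPy's truncated Newton solver ("TNC") on a toy quadratic objective at each level; the objective, grid levels, and prolongation operator are stand-ins for the paper's optical flow energies, not its actual algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def energy_factory(n, seed=0):
    """Toy 'variational' objective on an n-point grid: a data-fit term plus
    a smoothness term, standing in for an optical flow energy."""
    rng = np.random.default_rng(seed)
    target = np.sin(np.linspace(0.0, np.pi, n)) + 0.05 * rng.normal(size=n)
    def energy(u):
        return 0.5 * np.sum((u - target) ** 2) + 0.5 * np.sum(np.diff(u) ** 2)
    return energy

def prolong(u_coarse, n_fine):
    """Linear interpolation from a coarse grid to a finer grid."""
    xc = np.linspace(0.0, 1.0, len(u_coarse))
    xf = np.linspace(0.0, 1.0, n_fine)
    return np.interp(xf, xc, u_coarse)

# Coarse-to-fine: solve each level with truncated Newton (TNC), then
# prolong the solution as the initial guess for the next, finer level.
levels = [9, 17, 33, 65]
u = np.zeros(levels[0])
for n in levels:
    res = minimize(energy_factory(n), prolong(u, n), method="TNC")
    u = res.x
print("final energy:", energy_factory(levels[-1])(u))
```

The bidirectional FMG/OPT variant would additionally restrict back to coarse grids to compute corrections, treat each correction as a search direction, and scale it with a line search before returning to the fine grid.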