964 results for Computational modelling by homology
Abstract:
In the investigation of metal complexation by electroanalytical tools, two general approaches are used. The first, called hard-modelling, is based on the formulation of a joint physicochemical model for the electrode and complexation processes and on the analytical or numerical solution of that model. Fitting the model parameters to the experimental data then yields the desired information about the complexation process. The second approach, called soft-modelling, is based on the identification of a complexation model through numerical and statistical analysis of the data, without any prior assumption of a model. This approach, which has been used extensively with spectroscopic data, has scarcely been used with electrochemical data. In this article we address the formulation of a hard model for metal complexation in systems containing mixtures of ligands, including macromolecular ligands, and the application of
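As a concrete illustration of the hard-modelling approach described above, the sketch below fits the stability constant of a hypothetical 1:1 complex M + L ⇌ ML to synthetic titration data by least squares; the model, constants and data are placeholders, not those of the article.

```python
# A minimal hard-modelling sketch: solve the 1:1 complexation mass balances
# analytically and fit logK to (synthetic) free-metal measurements.
import numpy as np
from scipy.optimize import curve_fit

C_M = 1e-5                                   # total metal concentration (mol/L), assumed

def free_metal(cL, logK):
    """Free [M] from the mass balances: K[M]^2 + (1 + K(cL - cM))[M] - cM = 0."""
    K = 10.0 ** logK
    b = 1.0 + K * (cL - C_M)
    return (-b + np.sqrt(b**2 + 4.0 * K * C_M)) / (2.0 * K)

cL = np.linspace(0.0, 5e-5, 25)              # ligand titration points (mol/L)
rng = np.random.default_rng(3)
measured = free_metal(cL, 6.0) * rng.normal(1.0, 0.02, cL.size)  # 2% noise, true logK = 6

(logK_fit,), _ = curve_fit(free_metal, cL, measured, p0=[5.0])
print(f"fitted logK = {logK_fit:.2f}")
```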
Abstract:
Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales.
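The fixed-lid planar water surface approximation mentioned above can be illustrated in a few lines: flow depth is simply the non-negative difference between a fixed, gently sloping planar water surface and the bed elevation. The grid, slope and elevations below are invented placeholders, not data from the study.

```python
# A minimal sketch of the fixed-lid planar water surface approximation:
# depth = planar water surface elevation - bed elevation, floored at zero.
import numpy as np

nx, ny = 200, 50                      # grid cells (streamwise, cross-stream), assumed
dx = 150.0                            # cell size (m), assumed
slope = 3e-5                          # water surface slope typical of large sand-bed rivers
eta0 = 20.0                           # water surface elevation at the upstream end (m)

x = np.arange(nx) * dx
eta = eta0 - slope * x                # fixed-lid planar water surface (m)
z_bed = 15.0 + np.random.default_rng(1).normal(0.0, 1.5, (ny, nx))  # synthetic bed (m)

depth = np.clip(eta[None, :] - z_bed, 0.0, None)   # no negative depths on dry cells
print(f"mean depth {depth.mean():.2f} m, wet fraction {(depth > 0).mean():.2f}")
```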
Abstract:
Background: It has been shown in a variety of organisms, including mammals, that genes that appeared recently in evolution, for example orphan genes, evolve faster than older genes. Low functional constraints at the time of origin of novel genes may explain these results. However, this observation has recently been attributed to an artifact caused by the inability of Blast to detect the fastest-evolving genes in different eukaryotic genomes. Distinguishing between these two possible explanations would be of great importance for any study dealing with the taxon distribution of proteins and the origin of novel genes. Results: Here we used simulations of protein sequences to examine the capacity of Blast to detect proteins of diverse evolutionary rates in the different species of a eukaryotic phylogenetic tree that included metazoans, fungi and plants. We simulated the evolution of protein-coding genes with the same evolutionary rates as those observed in functional mammalian genes and with among-site rate heterogeneity. Under these conditions, we found that only a very small percentage of simulated ancestral eukaryotic proteins was affected by the Blast artifact. We show that the good detectability achieved by Blast is due to the heterogeneity of protein evolutionary rates across sites, since a small conserved motif in a sequence suffices to detect its homologues. Our results indicate that Blast, at least when applied within eukaryotes, only misses homologues of extremely fast-evolving sequences, which are rare in the mammalian genome, as well as sequences evolving homogeneously across sites, and pseudogenes. Conclusion: Although great care should be exercised in the recognition of remote homologues, most functional mammalian genes can be detected in eukaryotic genomes by Blast. That is, the majority of functional mammalian genes do not evolve so fast that they would escape detection in other metazoans, fungi or plants, had they been present in these organisms. Thus, the correlation previously found between gene age and evolutionary rate does not seem to be due to a pure Blast artifact, at least for mammals. This may have important implications for understanding the mechanisms by which novel genes originate.
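The simulation strategy can be caricatured as follows: evolve a protein sequence with gamma-distributed among-site rates and check how much identity survives in the most conserved window, which is what keeps a homologue detectable. All parameters below are illustrative, not those of the study.

```python
# A toy simulation of protein evolution with among-site rate heterogeneity
# under a Poisson (Jukes-Cantor-like) substitution model over 20 amino acids.
import numpy as np

rng = np.random.default_rng(42)
L, branch = 300, 2.0                   # sequence length; expected substitutions/site
rates = rng.gamma(shape=0.5, scale=2.0, size=L)       # mean-1 gamma rates (alpha = 0.5)

anc = rng.integers(0, 20, L)                          # ancestral sequence
p_change = 1.0 - np.exp(-rates * branch)              # per-site substitution probability
hit = rng.random(L) < p_change
der = anc.copy()
der[hit] = rng.integers(0, 20, hit.sum())             # redraw substituted sites (may repeat)

ident = (anc == der).astype(float)
window = np.convolve(ident, np.ones(30) / 30, mode="valid")   # 30-residue sliding identity
print(f"global identity {ident.mean():.2f}, best 30-aa window {window.max():.2f}")
```

With rate heterogeneity the best window stays far above the global identity, which is the effect the authors invoke to explain why Blast rarely misses such homologues.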
Abstract:
The M-Coffee server is a web server that makes it possible to compute multiple sequence alignments (MSAs) by running several MSA methods and combining their output into a single model. This allows users to run all their methods of choice simultaneously without having to choose one of them arbitrarily. The MSA is delivered along with a local estimation of its consistency with the individual MSAs it was derived from. The computation of the consensus multiple alignment is carried out using a special mode of the T-Coffee package [Notredame, Higgins and Heringa (T-Coffee: a novel method for fast and accurate multiple sequence alignment. J. Mol. Biol. 2000; 302: 205-217); Wallace, O'Sullivan, Higgins and Notredame (M-Coffee: combining multiple sequence alignment methods with T-Coffee. Nucleic Acids Res. 2006; 34: 1692-1699)]. Given a set of sequences (DNA or proteins) in FASTA format, M-Coffee delivers a multiple alignment in the most common formats. M-Coffee is a free, open-source package distributed under a GPL license, and it is available either as a standalone package or as a web service from www.tcoffee.org.
Abstract:
One of the most important issues in molecular biology is to understand the regulatory mechanisms that control gene expression. Gene expression is often regulated by proteins, called transcription factors, which bind to short (5 to 20 base pairs), degenerate segments of DNA. Experimental efforts towards understanding the sequence specificity of transcription factors are laborious and expensive, but can be substantially accelerated with the use of computational predictions. This thesis describes the use of algorithms and resources for transcription factor binding site analysis in addressing quantitative modelling, where probabilistic models are built to represent the binding properties of a transcription factor and can be used to find new functional binding sites in genomes. Initially, an open-access database (HTPSELEX) was created, holding high-quality binding sequences for two eukaryotic families of transcription factors, namely CTF/NF1 and LEF1/TCF. The binding sequences were elucidated using a recently described experimental procedure called HTP-SELEX that allows the generation of a large number (>1000) of binding sites using mass sequencing technology. For each HTP-SELEX experiment we also provide accurate primary experimental information about the protein material used, details of the wet-lab protocol, an archive of sequencing trace files, and assembled clone sequences of the binding sequences. The database also offers reasonably large SELEX libraries obtained with conventional low-throughput protocols. The database is available at http://wwwisrec.isb-sib.ch/htpselex/ and ftp://ftp.isrec.isb-sib.ch/pub/databases/htpselex. The Expectation-Maximisation (EM) algorithm is one of the most frequently used methods for estimating probabilistic models that represent the sequence specificity of transcription factors. We present computer simulations to estimate the precision of EM-estimated models as a function of data set parameters (such as the length of the initial sequences, the number of initial sequences, and the percentage of non-binding sequences). We observed a remarkable robustness of the EM algorithm with regard to the length of the training sequences and the degree of contamination. The HTPSELEX database and the benchmark results of the EM algorithm formed part of the foundation for the subsequent project, in which a statistical framework based on hidden Markov models was developed to represent the sequence specificity of the transcription factors CTF/NF1 and LEF1/TCF using the HTP-SELEX experiment data. The hidden Markov model framework is capable of both predicting and classifying CTF/NF1 and LEF1/TCF binding sites. A covariance analysis of the binding sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism. We next tested the LEF1/TCF model by computing binding scores for a set of LEF1/TCF binding sequences for which relative affinities had been determined experimentally, using non-linear regression. The predicted and experimentally determined binding affinities correlated well.
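The kind of EM estimation benchmarked here can be sketched as a MEME-style one-occurrence-per-sequence (OOPS) motif finder: the E-step computes a posterior over motif start positions and the M-step accumulates expected base counts. Everything below (synthetic data, motif width, iteration count) is a toy stand-in, not the thesis implementation.

```python
# A compact OOPS-style EM sketch: estimate a position weight matrix (PWM)
# from sequences that each contain exactly one planted motif occurrence.
import numpy as np

rng = np.random.default_rng(0)
W, N, L = 8, 200, 50                          # motif width, #sequences, sequence length
true_pwm = np.full((W, 4), 0.1)
true_pwm[np.arange(W), rng.integers(0, 4, W)] = 0.7   # one preferred base per position

seqs = rng.integers(0, 4, (N, L))             # uniform ACGT background
for i, s in enumerate(rng.integers(0, L - W + 1, N)):
    for j in range(W):                        # plant one motif instance per sequence
        seqs[i, s + j] = rng.choice(4, p=true_pwm[j])

pwm = np.full((W, 4), 0.25) + rng.random((W, 4)) * 0.01   # near-uniform start
pwm /= pwm.sum(axis=1, keepdims=True)
for _ in range(30):                           # EM iterations
    counts = np.zeros((W, 4))
    for seq in seqs:
        # E-step: posterior over motif start positions (background ratio cancels)
        lik = np.array([np.prod(pwm[np.arange(W), seq[s:s + W]])
                        for s in range(L - W + 1)])
        post = lik / lik.sum()
        # M-step contribution: expected letter counts under the posterior
        for s, p in enumerate(post):
            for j in range(W):
                counts[j, seq[s + j]] += p
    pwm = (counts + 0.1) / (counts + 0.1).sum(axis=1, keepdims=True)

print(np.round(pwm, 2))                       # recovered PWM, close to true_pwm
```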
Abstract:
Nuclear power plants are designed and built to cope with various operational transients and accidents without damage to the plant and without endangering the population or the environment. It is highly unlikely that a nuclear power plant accident would progress as far as reactor core damage, as a consequence of which the oxidation of the core materials may produce hydrogen. Following a breach of the cooling circuit, the hydrogen may be transported into the containment building, where it can form a flammable mixture with the oxygen in the air and burn, or even detonate. The temperature and pressure loads caused by a hydrogen fire endanger the integrity of the containment and the operability of the safety systems inside it, so an effective and reliable hydrogen management system is needed. Passive autocatalytic recombiners (PARs) are used for hydrogen management in an increasing number of European nuclear power plants. These recombiners remove hydrogen through a catalytic reaction in which hydrogen reacts with oxygen on the catalyst surface to form water vapour. The recombiners are completely passive and require no external power or operator action to start up or to operate. Research into recombiner behaviour aims to establish their operability in all possible accident scenarios, to optimize their design, and to determine their optimal number and placement within the containment. The containment is modelled either with lumped parameter (LP) codes, with computational fluid dynamics (CFD) codes, or with combinations of the two. In these codes, recombiners are modelled with an experimental, a theoretical, or a global-approach model. This thesis presents the results of verification calculations of the hydrogen consumption of the Siemens FR90/1-150 recombiner model included in the TONUS 0D code, together with the results of TONUS 0D calculations on the interaction of Siemens recombiners. TONUS, developed by CEA (Commissariat à l'Énergie Atomique), is an LP (0D) and CFD hydrogen analysis code used for modelling hydrogen distribution, combustion and detonation. TONUS is also used for modelling hydrogen removal by passive autocatalytic recombiners. The factors affecting hydrogen consumption were isolated and investigated one at a time. To study recombiner interactions, recombiners of different sizes and in different numbers were placed in the same volume. The Siemens recombiner model in the TONUS 0D code computes the hydrogen consumption as expected, and the results confirm the reliability of the physical modelling in TONUS 0D. Possible local distributions within the studied volume could not be detected with the LP code, because it computes with volume-averaged quantities. A CFD code is needed to study local distributions.
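In the spirit of the lumped-parameter calculations described here, a single-volume PAR depletion balance might look like the sketch below. The rate correlation and every constant are hypothetical placeholders, not the Siemens FR90/1-150 correlation or TONUS internals.

```python
# A minimal lumped-parameter sketch of PAR hydrogen depletion in one
# well-mixed volume; all constants are illustrative assumptions.

V = 1000.0            # free containment volume (m^3), assumed
x_h2 = 0.04           # initial hydrogen mole fraction, assumed
p = 1.5e5             # containment pressure (Pa), assumed
k_a, k_b = 1e-7, 5e-2 # placeholder correlation constants (not Siemens values)
x_on = 0.005          # assumed start-up/shut-off hydrogen threshold

dt, t = 1.0, 0.0
while x_h2 > x_on:
    rate = (k_a * p + k_b) * x_h2   # placeholder pressure-dependent depletion rate
    x_h2 -= rate / V * dt           # volume-averaged balance: LP codes see no gradients
    t += dt
print(f"hydrogen below {x_on:.1%} after {t / 3600:.1f} h")
```

As the abstract notes, such a volume-averaged balance cannot reveal local hydrogen distributions; that requires a CFD treatment.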
Abstract:
Emission reduction has played an important role in the development of internal combustion engines in recent years. Many regulatory bodies are setting new, stricter emission limits. The limits have typically been strictest for the small high-speed diesel engines produced by the automotive industry, but recently pressure has also been put on larger medium-speed and low-speed diesel engines. Emission limits differ depending on the engine type, the fuel used, and the location where the engine is operated, owing to differing local laws and regulations. In diesel engine emissions, the greatest attention must be paid to nitrogen oxides, smoke formation and particulates. Computational fluid dynamics (CFD) offers good possibilities for studying the in-cylinder phenomena of a diesel engine during combustion. CFD is a useful tool for evaluating engine performance and emission formation, and it makes it possible to test the effects of different parameters and geometries without expensive engine test runs. CFD can also be used for teaching purposes, to improve understanding of the combustion process. In the future, combustion simulation with CFD will undoubtedly be an important part of engine development. In this Master's thesis, combustion simulations were carried out for two Wärtsilä medium-speed diesel engines equipped with different fuel injection systems. The injection system of the W46 engine is a conventional, mechanically controlled pump injector, while the W46-CR engine has an electronically controlled common rail injection system. In addition to these engines and their injection profiles in production use, various new injection profiles were tested in the simulations in order to reveal the strengths and weaknesses of the different profile types. At low load the main interest is soot formation; at full load it is NOx formation and fuel consumption. The simulation results showed that soot formation at low load can be clearly reduced by multi-pulse injection, in which a single injection event is divided into two or more pulses. So-called post injection appears to be particularly effective in reducing soot. Low NOx emissions and good fuel consumption at full load can be achieved with a gradually increasing injection rate.
Abstract:
The aim of this thesis was to model the additional costs caused by a new product feature and to design a decision-making tool for the management team of forwarder production at Timberjack Oy. The intention was to create a coarse-level model suitable for determining the costs of different types of product features. The effect of a new product feature on the company's various functions was investigated through interviews, supported by a questionnaire. The aim of the interviews was to identify the processes, activities and resources that are necessary for bringing a new product feature into production and for producing it. The model was designed on the basis of the interviews and of data obtained from the company's information system. The backbone of the model is formed by the processes and activities that a new product feature affects. Resources consumed by the new product feature, either directly or indirectly, were taken into account. Only additional costs were included in the analysis; overhead costs that are incurred in any case, regardless of whether the new product feature is implemented, were excluded. The model is a generalisation of the additional costs caused by a new product feature, because it is intended to suit the determination of costs caused by different types of product features. In addition, the model is suitable for estimating the costs of other minor product changes.
Abstract:
Angiogenesis plays a key role in tumor growth and cancer progression. TIE-2-expressing monocytes (TEM) have been reported to account critically for tumor vascularization and growth in mouse experimental tumor models, but the molecular basis of their pro-angiogenic activity is largely unknown. Moreover, differences in the pro-angiogenic activity between blood-circulating and tumor-infiltrating TEM in human patients have not been established to date, hindering the identification of specific targets for therapeutic intervention. In this work, we investigated these differences and the phenotypic reversal of breast tumor pro-angiogenic TEM to a weakly pro-angiogenic phenotype by combining Boolean modelling and experimental approaches. Firstly, we show that in breast cancer patients the pro-angiogenic activity of TEM increases drastically from blood to tumor, suggesting that the tumor microenvironment shapes the highly pro-angiogenic phenotype of TEM. Secondly, we predicted in silico all minimal perturbations transitioning the highly pro-angiogenic phenotype of tumor TEM to the weakly pro-angiogenic phenotype of blood TEM, and vice versa. The in silico predicted perturbations were validated experimentally using patient TEM. In addition, gene expression profiling of TEM transitioned to a weakly pro-angiogenic phenotype confirmed that TEM are plastic cells that can be reverted to immunologically potent monocytes. Finally, relapse-free survival analysis showed a statistically significant difference between patients whose tumors had high and low expression values for the genes encoding the transitioning proteins detected in silico and validated on patient TEM. In conclusion, the inferred TEM regulatory network accurately captured experimental TEM behavior and highlighted crosstalk between specific angiogenic and inflammatory signaling pathways of outstanding importance for controlling their pro-angiogenic activity. The results showed the successful in vitro reversion of this activity by perturbation of in silico predicted target genes in tumor-derived TEM, and indicated that targeting tumor TEM plasticity may constitute a novel, valid therapeutic strategy in breast cancer.
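A toy version of the Boolean-modelling step might look like the following: iterate a small synchronous Boolean network to its attractor, then screen single-node forced perturbations for those that switch the pro-angiogenic output off. The network below is invented for illustration and is not the inferred TEM network.

```python
# A minimal Boolean-network perturbation screen over an invented 4-node network.
nodes = ["VEGF", "TNF", "NFkB", "ANGIO"]

def step(s, forced=None):
    nxt = {
        "VEGF": s["NFkB"] and not s["TNF"],   # inflammation represses VEGF here
        "TNF": s["TNF"],                      # treated as an external input
        "NFkB": s["VEGF"] or s["NFkB"],       # self-sustaining once active
        "ANGIO": s["VEGF"] and s["NFkB"],     # pro-angiogenic output node
    }
    nxt.update(forced or {})                  # apply forced (perturbed) values
    return nxt

def attractor(s, forced=None):
    seen = []
    while s not in seen:                      # synchronous updates to a repeat state
        seen.append(s)
        s = step(s, forced)
    return s

tumor = {"VEGF": True, "TNF": False, "NFkB": True, "ANGIO": True}  # pro-angiogenic state
for node in nodes:
    for value in (False, True):               # exhaustive single-node perturbations
        if not attractor(tumor, {node: value})["ANGIO"]:
            print(f"forcing {node}={value} switches ANGIO off")
```

Real applications enumerate minimal perturbation sets over much larger inferred networks, but the attractor-and-screen logic is the same.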
Abstract:
1. Species distribution models (SDMs) have become a standard tool in ecology and applied conservation biology. Modelling rare and threatened species is particularly important for conservation purposes. However, modelling rare species is difficult because the combination of few occurrences and many predictor variables easily leads to model overfitting. A new strategy using ensembles of small models was recently developed in an attempt to overcome this limitation of rare species modelling, but has so far been tested successfully for only a single species. Here, we aim to test the approach more comprehensively on a large number of species, including a transferability assessment. 2. For each species, numerous small (here bivariate) models were calibrated, evaluated and averaged into an ensemble weighted by AUC scores. These 'ensembles of small models' (ESMs) were compared to standard SDMs built with three commonly used modelling techniques (GLM, GBM, Maxent) and their ensemble prediction. We tested 107 rare and under-sampled plant species of conservation concern in Switzerland. 3. We show that ESMs performed significantly better than standard SDMs. The rarer the species, the more pronounced the effects were. ESMs were also superior to standard SDMs and their ensemble when they were independently evaluated using a transferability assessment. 4. By averaging simple small models into an ensemble, ESMs avoid overfitting, through the reduced number of predictor variables per model, without losing explanatory power. They further improve the reliability of species distribution models, especially for rare species, and thus help to overcome the limitations of modelling rare species.
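A minimal ESM sketch under stated assumptions: calibrate every bivariate model, score each by AUC, and average the predictions weighted by AUC. Logistic regression on synthetic data stands in here; a real ESM would use cross-validation and the GLM/GBM/Maxent techniques named above.

```python
# Ensembles of small models (ESM), simplified: all bivariate logistic models,
# AUC-weighted average of their predictions. Data below are synthetic.
from itertools import combinations
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                       # 8 environmental predictors
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 300) > 1.5).astype(int)  # rare-ish species
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)

preds, weights = [], []
for i, j in combinations(range(X.shape[1]), 2):     # every bivariate model
    m = LogisticRegression().fit(Xtr[:, [i, j]], ytr)
    p = m.predict_proba(Xte[:, [i, j]])[:, 1]
    auc = roc_auc_score(yte, p)
    if auc > 0.5:                                   # drop uninformative small models
        preds.append(p)
        weights.append(auc)

ensemble = np.average(preds, axis=0, weights=weights)
print(f"ESM AUC: {roc_auc_score(yte, ensemble):.2f} from {len(preds)} bivariate models")
```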
Abstract:
Crystallization is a purification method used to obtain crystalline product of a certain crystal size. It is one of the oldest industrial unit processes and is commonly used in modern industry because of its good purification capability from rather impure solutions with reasonably low energy consumption. However, the process is extremely challenging to model and control because it involves inhomogeneous mixing and many simultaneous phenomena, such as nucleation, crystal growth and agglomeration. All these phenomena depend on supersaturation, i.e. the difference between the actual liquid-phase concentration and the solubility. Homogeneous mass and heat transfer in the crystallizer would greatly simplify the modelling and control of crystallization processes; such conditions are, however, not the reality, especially in industrial-scale processes. Consequently, the hydrodynamics of crystallizers, i.e. the combination of mixing, feed and product removal flows, and recycling of the suspension, needs to be thoroughly investigated. Understanding of hydrodynamics is important in crystallization, especially in larger-scale equipment where uniform flow conditions are difficult to attain. It is also important to understand the different size scales of mixing: micro-, meso- and macromixing. Fast processes, like nucleation and chemical reactions, typically depend strongly on micro- and mesomixing, but macromixing, which equalizes the concentrations of all the species within the entire crystallizer, cannot be disregarded. This study investigates the influence of hydrodynamics on crystallization processes. Modelling of crystallizers with the mixed suspension mixed product removal (MSMPR) theory (ideal mixing), computational fluid dynamics (CFD), and a compartmental multiblock model is compared. The importance of proper verification of CFD and multiblock models is demonstrated. In addition, the influence of different hydrodynamic conditions on reactive crystallization process control is studied. Finally, the effect of extreme local supersaturation is studied using power ultrasound to initiate nucleation. The present work shows that mixing and chemical feeding conditions clearly affect induction time and cluster formation, nucleation, growth kinetics, and agglomeration. Consequently, the properties of crystalline end products, e.g. crystal size and crystal habit, can be influenced by management of mixing and feeding conditions. Impurities may have varying impacts on crystallization processes. As an example, manganese ions were shown to replace magnesium ions in the crystal lattice of magnesium sulphate heptahydrate, increasing the crystal growth rate significantly, whereas sodium ions showed no interaction at all. Modelling of continuous crystallization based on MSMPR theory showed that the model is feasible in a small laboratory-scale crystallizer, whereas in larger pilot- and industrial-scale crystallizers hydrodynamic effects should be taken into account. For that reason, CFD and multiblock modelling are shown to be effective tools for modelling crystallization with inhomogeneous mixing. The present work also shows that the selection of the measurement point, or points in the case of multiprobe systems, is crucial when process analytical technology (PAT) is used to control larger-scale crystallization. The thesis concludes by describing how control of local supersaturation by highly localized ultrasound was successfully applied to induce nucleation and to control polymorphism in the reactive crystallization of L-glutamic acid.
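For reference, the ideal-mixing MSMPR baseline against which the hydrodynamic models are compared has a closed-form steady-state crystal size distribution, n(L) = n0·exp(-L/(G·tau)). The sketch below evaluates it for illustrative values of growth rate and residence time; the numbers are assumptions, not data from the thesis.

```python
# Steady-state MSMPR crystal size distribution for illustrative parameters.
import numpy as np

G = 1e-8          # crystal growth rate (m/s), assumed
tau = 3600.0      # mean residence time (s), assumed
n0 = 1e12         # nuclei population density (#/m^3/m), assumed

L = np.linspace(0.0, 6 * G * tau, 500)        # size axis up to 6 characteristic sizes
n = n0 * np.exp(-L / (G * tau))               # MSMPR number density n(L)

total = np.trapz(n, L)                        # total number per m^3, ~ n0 * G * tau
print(f"characteristic size G*tau = {G * tau * 1e6:.0f} um, "
      f"dominant (mass-mode) size = 3*G*tau = {3 * G * tau * 1e6:.0f} um, "
      f"total number = {total:.2e} #/m^3")
```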
Abstract:
The results shown in this thesis are based on selected publications of the 2000s decade. The work was carried out in several national and EC-funded public research projects and in close cooperation with industrial partners. The main objective of the thesis was to study and quantify the most important phenomena of circulating fluidized bed (CFB) combustors by developing and applying proper experimental and modelling methods using laboratory-scale equipment. An understanding of these phenomena plays an essential role in the development of the combustion and emission performance, and of the availability and controls, of CFB boilers. Experimental procedures to study fuel combustion behaviour under CFB conditions are presented in the thesis. Steady-state and dynamic measurements under well-controlled conditions were carried out to produce the data needed for the development of high-efficiency, utility-scale CFB technology. The importance of combustion control and furnace dynamics is emphasized when CFB boilers are scaled up with a once-through steam cycle. Qualitative information on fuel combustion characteristics was obtained directly by comparing flue gas oxygen responses during impulse-change experiments with the fuel feed. A one-dimensional, time-dependent model was developed to analyse the measurement data. Emission formation was studied together with fuel combustion behaviour. Correlations were developed for NO, N2O, CO and char loading as functions of temperature and oxygen concentration in the bed area. An online method to characterize char loading under CFB conditions was developed and validated with pilot-scale CFB tests. Finally, a new method to control the air and fuel feeds in CFB combustion was introduced. The method is based on models and an analysis of the fluctuation of the flue gas oxygen concentration. The effect of high oxygen concentrations on fuel combustion behaviour was also studied to evaluate the potential of CFB boilers for applying oxygen-firing technology to CCS. In future studies, it will be necessary to go through the whole scale-up chain, from laboratory-scale phenomena devices through pilot-scale test rigs to large-scale commercial boilers, in order to validate the applicability and scalability of the results. This thesis covers the chain between the laboratory-scale phenomena test rig (bench scale) and the CFB process test rig (pilot). CFB technology has been scaled up successfully from industrial scale to utility scale during the last decade. The work shown in the thesis, for its part, has supported this development by producing new detailed information on combustion under CFB conditions.
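The impulse-response idea, a flue gas oxygen signal responding to an impulse change in fuel feed through a char inventory balance, can be caricatured in a one-cell, time-dependent model. All parameters below are invented placeholders, not the correlations developed in the thesis.

```python
# A toy one-cell char inventory balance driven by a fuel-feed impulse; the
# flue gas O2 response follows from the char burn rate. Illustrative only.
import numpy as np

dt, T = 0.5, 600.0                 # time step and horizon (s)
k = 0.02                           # char combustion rate constant (1/s), assumed
q_air_o2 = 0.30                    # O2 fed with combustion air (kg/s), assumed
s = 2.0                            # kg O2 consumed per kg char burned, assumed
flue = 1.5                         # flue gas mass flow (kg/s), assumed

t = np.arange(0.0, T, dt)
feed = np.where((t >= 60) & (t < 70), 0.2, 0.0)   # 10 s impulse of extra char feed (kg/s)

m, o2 = 0.0, []                    # no base fuel feed, so baseline O2 equals air O2
for f in feed:
    burn = k * m                   # burn rate proportional to char inventory
    m += (f - burn) * dt           # char mass balance in the bed
    o2.append((q_air_o2 - s * burn) / flue)       # flue gas O2 mass fraction

print(f"O2 dips from {o2[0]:.3f} to {min(o2):.3f} "
      f"and recovers with a ~{1 / k:.0f} s time constant")
```

The shape and time constant of the simulated oxygen dip are what such impulse experiments read off as qualitative fuel combustion characteristics.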