973 results for computational modelling
Abstract:
1. The ecological niche is a fundamental biological concept. Modelling species' niches is central to numerous ecological applications, including predicting species invasions, identifying disease reservoirs, designing nature reserves and forecasting the effects of anthropogenic and natural climate change on species' ranges. 2. A computational analogue of Hutchinson's ecological niche concept (the multidimensional hyperspace of a species' environmental requirements) is the support of the distribution of environments in which the species persists. Recently developed machine-learning algorithms can estimate the support of such high-dimensional distributions. We show how support vector machines can be used to map ecological niches using only observations of species presence, training distribution models for 106 species of woody plants and trees in a montane environment with up to nine environmental covariates. 3. We compared the accuracy of three methods that differ in their approaches to reducing model complexity, testing the models with independent observations of both species presence and species absence. We found that the simplest procedure, which uses all available variables and no pre-processing to reduce correlation, was best overall. Ecological niche models based on support vector machines are theoretically superior to models that rely on simulated pseudo-absence data, and they are comparable in empirical tests. 4. Synthesis and applications. Accurate species distribution models are crucial for effective environmental planning, management and conservation, and for unravelling the role of the environment in human health and welfare. Models based on distribution estimation rather than classification overcome theoretical and practical obstacles that pervade species distribution modelling. In particular, ecological niche models based on machine-learning algorithms for estimating the support of a statistical distribution provide a promising new approach to identifying species' potential distributions and to projecting changes in these distributions as a result of climate change, land use and landscape alteration.
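As a concrete illustration of the support-estimation idea described above, the sketch below fits a one-class SVM to presence-only records and predicts suitability across a landscape. The arrays, the covariate count and the hyperparameters are hypothetical stand-ins, not the study's data or settings.

```python
# A minimal sketch of presence-only niche modelling via support estimation
# with a one-class SVM; illustrative data, not the authors' exact pipeline.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
presences = rng.normal(0.0, 1.0, size=(200, 9))   # 9 covariates at presence sites
landscape = rng.normal(0.0, 2.0, size=(1000, 9))  # covariates at candidate cells

scaler = StandardScaler().fit(presences)
# nu bounds the fraction of presences treated as outliers; gamma sets kernel width
model = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale")
model.fit(scaler.transform(presences))

# +1 = inside the estimated support (suitable), -1 = outside (unsuitable)
suitability = model.predict(scaler.transform(landscape))
print("predicted suitable cells:", int((suitability == 1).sum()))
```

Note that no pseudo-absences are generated anywhere in this sketch: the model is trained on presence records alone, which is the theoretical advantage the abstract highlights.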
Abstract:
One of the most important issues in molecular biology is to understand the regulatory mechanisms that control gene expression. Gene expression is often regulated by proteins called transcription factors, which bind to short (5 to 20 base pairs), degenerate segments of DNA. Experimental efforts towards understanding the sequence specificity of transcription factors are laborious and expensive, but can be substantially accelerated with the use of computational predictions. This thesis describes the use of algorithms and resources for transcription factor binding site analysis in quantitative modelling, where probabilistic models are built to represent the binding properties of a transcription factor and can be used to find new functional binding sites in genomes. Initially, an open-access database (HTPSELEX) was created, holding high-quality binding sequences for two eukaryotic families of transcription factors, namely CTF/NF1 and LEF1/TCF. The binding sequences were elucidated using a recently described experimental procedure called HTP-SELEX, which allows the generation of a large number (> 1000) of binding sites using mass sequencing technology. For each HTP-SELEX experiment we also provide accurate primary experimental information about the protein material used, details of the wet lab protocol, an archive of sequencing trace files, and assembled clone sequences of the binding sequences. The database also offers reasonably large SELEX libraries obtained with conventional low-throughput protocols. The database is available at http://wwwisrec.isb-sib.ch/htpselex/ and ftp://ftp.isrec.isb-sib.ch/pub/databases/htpselex. The Expectation-Maximisation (EM) algorithm is one of the most frequently used methods for estimating probabilistic models of the sequence specificity of transcription factors. We present computer simulations estimating the precision of EM-estimated models as a function of data set parameters (such as the length of the initial sequences, the number of initial sequences, and the percentage of non-binding sequences). We observed a remarkable robustness of the EM algorithm with regard to the length of the training sequences and the degree of contamination. The HTPSELEX database and the benchmark results for the EM algorithm formed part of the foundation for the subsequent project, in which a statistical framework called a hidden Markov model was developed to represent the sequence specificity of the transcription factors CTF/NF1 and LEF1/TCF using the HTP-SELEX experiment data. The hidden Markov model framework is capable of both predicting and classifying CTF/NF1 and LEF1/TCF binding sites. A covariance analysis of the binding sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism. We then tested the LEF1/TCF model by computing binding scores for a set of LEF1/TCF binding sequences for which relative affinities had been determined experimentally using non-linear regression. The predicted and experimentally determined binding affinities correlated well.
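For illustration, the sketch below runs a toy EM loop that estimates a position weight matrix (PWM) from unaligned sequences, in the spirit of the EM benchmarking described above; the sequences, motif width and pseudocounts are hypothetical, and production motif finders handle background models and convergence far more carefully.

```python
# A toy EM iteration for estimating a PWM from unaligned sequences, assuming
# one binding site per sequence and a uniform prior over site positions.
import numpy as np

BASES = "ACGT"
seqs = ["ACGTAGGA", "TTACGTAG", "GGACGTAA"]   # hypothetical SELEX reads
W = 4                                          # assumed motif width
pwm = np.full((W, 4), 0.25)                    # start from a flat PWM

def site_prob(site, pwm):
    """Likelihood of a length-W window under the current PWM."""
    return np.prod([pwm[i, BASES.index(b)] for i, b in enumerate(site)])

for _ in range(20):                            # EM iterations
    counts = np.full((W, 4), 1e-3)             # small pseudocount
    for s in seqs:
        windows = [s[j:j + W] for j in range(len(s) - W + 1)]
        probs = np.array([site_prob(w, pwm) for w in windows])
        probs /= probs.sum()                   # E-step: posterior over positions
        for w, p in zip(windows, probs):       # M-step: expected base counts
            for i, b in enumerate(w):
                counts[i, BASES.index(b)] += p
    pwm = counts / counts.sum(axis=1, keepdims=True)

print(np.round(pwm, 2))
```

The robustness results in the thesis concern exactly the knobs exposed here: sequence length, number of sequences, and the fraction of reads that contain no true site.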
Abstract:
Nuclear power plants are designed and built so that they can cope with various operational transients and accidents without damage to the plant and without endangering the population or the environment. It is highly improbable that a nuclear power plant accident proceeds as far as reactor core damage, in which case the oxidation of core materials can produce hydrogen. If the coolant circuit is breached, hydrogen may be transported into the containment building, where it can form a flammable mixture with the oxygen in air and burn, or even detonate. The temperature and pressure loads caused by a hydrogen fire endanger the integrity of the containment and the operability of the safety systems inside it, so an efficient and reliable hydrogen management system is needed. Passive autocatalytic hydrogen recombiners are used for hydrogen management in an increasing number of European nuclear power plants. These recombiners remove hydrogen through a catalytic reaction in which hydrogen reacts with oxygen on the catalyst surface to form water vapour. The recombiners are fully passive and require no external energy or operator action to start up or operate. Research on recombiner behaviour aims at establishing their operability in all conceivable accident scenarios, optimizing their design, and determining their optimal number and placement within the containment. The containment is modelled either with lumped parameter (LP) codes, with computational fluid dynamics (CFD) codes, or with combinations of the two. In these codes, recombiners are modelled with an experimental, a theoretical, or a global-approach model. This thesis presents the results of verification calculations of the hydrogen consumption of the Siemens FR90/1-150 recombiner model included in the TONUS 0D code, and the results of TONUS 0D calculations of the interactions of Siemens recombiners. TONUS is an LP (0D) and CFD hydrogen analysis code developed by CEA (Commissariat à l'Énergie Atomique), used for modelling hydrogen distribution, combustion and detonation. TONUS is also used for modelling hydrogen removal by passive autocatalytic recombiners. The factors affecting hydrogen consumption were isolated and studied one at a time. To study recombiner interactions, recombiners of different sizes and in different numbers were placed in the same volume. The Siemens recombiner model in TONUS 0D computes the hydrogen consumption as expected, and the results confirm the reliability of the physical computation in TONUS 0D. Possible local distributions within the studied volume could not be observed with the LP code, because it computes with volume-averaged quantities. A CFD code is needed to study local distributions.
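To make the lumped-parameter recombiner modelling concrete, here is a minimal sketch of hydrogen depletion by passive autocatalytic recombiners (PARs) in a single well-mixed volume. The correlation form rate = (a·p + b)·c_H2 is of the generic empirical type used for Siemens-style recombiners, but the coefficients, thresholds and inventory closure below are illustrative assumptions, not the TONUS Siemens FR90/1-150 model.

```python
# A minimal lumped-parameter (0D) sketch of PAR hydrogen depletion; all
# numerical values are illustrative assumptions, not TONUS parameters.
def par_depletion_rate(p_bar, x_h2, a=0.137, b=0.167, x_on=0.02):
    """Hydrogen depletion rate [g/s] for one PAR; zero below a start-up threshold."""
    if x_h2 < x_on:
        return 0.0
    return (a * p_bar + b) * (100.0 * x_h2)   # correlation written in vol-% hydrogen

# Explicit-Euler depletion of a fixed hydrogen inventory by two identical PARs
m_h2, dt = 5000.0, 1.0                        # hydrogen inventory [g], time step [s]
x_h2, p = 0.05, 1.5                           # molar fraction, pressure [bar]
for step in range(600):                       # ten minutes of transient
    rate = 2 * par_depletion_rate(p, x_h2)    # two PARs in the same volume
    m_h2 = max(m_h2 - rate * dt, 0.0)
    x_h2 = 0.05 * m_h2 / 5000.0               # toy closure: fraction tracks inventory
print(f"H2 left after 10 min: {m_h2:.0f} g")
```

As the abstract notes, a 0D model of this kind sees only volume-averaged quantities; any local stratification of hydrogen requires a CFD treatment.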
Abstract:
Emission reduction has played an important role in the development of internal combustion engines in recent years. Many regulatory bodies are imposing new, stricter emission limits. Emission limits have typically been strictest for the small high-speed diesel engines produced by the automotive industry, but lately pressure has also been directed at larger medium-speed and low-speed diesel engines. Emission limits differ depending on the engine type, the fuel used, and where the engine is operated, owing to differing local laws and regulations. In diesel engine emissions, the most attention must be paid to nitrogen oxides, smoke formation and particulates. Computational fluid dynamics (CFD) offers good possibilities for studying the in-cylinder phenomena of a diesel engine during combustion. CFD is a useful tool for evaluating engine performance and emission formation. With CFD it is possible to test the effect of different parameters and geometries without expensive engine test runs. CFD can also be used for teaching purposes, to deepen understanding of the combustion process. In the future, combustion simulation with CFD will undoubtedly be an important part of engine development. In this thesis, combustion simulations were carried out for two Wärtsilä medium-speed diesel engines equipped with different fuel injection systems. The injection system of the W46 engine is a conventional mechanically controlled pump injector, whereas the W46-CR engine has an electronically controlled common rail injection system. In addition to these engines and the injection profiles in production use, various new injection profiles were tested in the simulations to reveal the strengths and weaknesses of the different profile types. At low load the interest lies in soot formation, and at full load in NOx formation and fuel consumption. The simulation results showed that soot formation at low load can be clearly reduced with multi-pulse injection, in which a single injection event is divided into two or more pulses. A so-called post injection appears to be particularly effective in reducing soot. Low NOx emissions and good fuel consumption at full load can be achieved with a gradually increasing injection rate.
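As a sketch of how multi-pulse injection profiles of the kind compared above might be parameterized as simulation input, consider the following; the pulse timings and relative rates are hypothetical, not the Wärtsilä W46 / W46-CR profiles.

```python
# A sketch of a piecewise-constant multi-pulse injection rate profile over
# crank angle; all pulse parameters are hypothetical illustrations.
def injection_rate(t_deg, pulses):
    """Injection rate at crank angle t_deg for a list of (start, end, rate) pulses."""
    return sum(rate for start, end, rate in pulses if start <= t_deg < end)

# main injection split in two, plus a short post injection to burn off soot
pulses = [(-10.0, -2.0, 1.0), (0.0, 6.0, 0.8), (12.0, 14.0, 0.5)]
profile = [(ca, injection_rate(ca, pulses)) for ca in range(-15, 20)]
for ca, q in profile[:5]:
    print(f"{ca:+} deg CA: {q:.1f} (relative rate)")
```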
Abstract:
PURPOSE OF REVIEW: Current computational neuroanatomy based on MRI focuses on morphological measures of the brain. We present recent methodological developments in quantitative MRI (qMRI) that provide standardized measures of the brain which go beyond morphology. We show how biophysical modelling of qMRI data can provide quantitative histological measures of brain tissue, leading to the emerging field of in vivo histology using MRI (hMRI). RECENT FINDINGS: qMRI has greatly improved the sensitivity and specificity of computational neuroanatomy studies. qMRI metrics can also be used as direct indicators of the mechanisms driving observed morphological findings. For hMRI, biophysical models of the MRI signal are being developed to directly access histological information such as cortical myelination, axonal diameters or the axonal g-ratio in white matter. Emerging results indicate promising prospects for the combined study of brain microstructure and function. SUMMARY: Non-invasive brain tissue characterization using qMRI or hMRI has significant implications for both research and clinical practice. Both approaches improve comparability across sites and time points, facilitating multicentre/longitudinal studies and standardized diagnostics. hMRI is expected to shed new light on the relationship between brain microstructure, function and behaviour, in both health and disease, and to become an indispensable addition to computational neuroanatomy.
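As an example of the histological quantities hMRI targets, the aggregate g-ratio of white matter can be derived from MRI-based myelin and axonal volume fraction maps; a relation commonly used in the g-ratio mapping literature is:

```latex
% Aggregate g-ratio from MRI-derived myelin (MVF) and axonal (AVF)
% volume fractions:
g = \sqrt{\frac{\mathrm{AVF}}{\mathrm{AVF} + \mathrm{MVF}}}
```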
Abstract:
The behavior of nuclear power plants must be known in all operational situations, and thermal hydraulics codes are used to simulate that behavior. These codes must be validated before they can be used reliably, by comparing simulation results against experimental results. In this thesis, a model of the PWR PACTEL steam generator was prepared with the TRAC/RELAP Advanced Computational Engine (TRACE) code, so that in the future the simulation results can be compared against results from the Advanced Process Simulator analysis software. The development of the model of the PWR PACTEL vertical steam generator is introduced in this thesis, and loss-of-feedwater transient simulation examples were carried out with the model.
Abstract:
Supersonic axial turbine stages typically exhibit lower efficiencies than subsonic axial turbine stages. One reason for the lower efficiency is the occurrence of shock waves. With higher pressure ratios, the flow inside the turbine easily becomes supersonic if there is only one turbine stage. Supersonic axial turbines can be designed in a smaller physical size than subsonic axial turbines of the same power. This makes them good candidates for turbochargers in large diesel engines, where space can be a limiting factor. The production costs are also lower for a supersonic axial turbine stage than for two subsonic stages. Since supersonic axial turbines are typically low-reaction turbines, they also create lower axial forces to be compensated with bearings than high-reaction turbines do. The effect of changing the stator-rotor axial gap in a small, high-rotational-speed supersonic axial flow turbine is studied at design and off-design conditions, as is the effect of pulsatile mass flow at the supersonic stator inlet. Five axial gaps (the axial space between stator and rotor) are modelled using three-dimensional computational fluid dynamics at the design conditions, and three axial gaps at the off-design conditions. Numerical reliability is examined in three independent studies. An additional measurement is made with the design turbine geometry at intermediate off-design conditions and is used to increase the reliability of the modelling. All numerical modelling is performed with the Navier-Stokes solver Finflo employing Chien's k–ε turbulence model. The modelling of the turbine shows that the total-to-static efficiency of the turbine decreases when the axial gap is increased at both design and off-design conditions. The efficiency drops almost linearly at the off-design conditions, whereas the efficiency drop accelerates with increasing axial gap at the design conditions. The modelling of the turbine stator with pulsatile inlet flow reveals that the mass flow pulsation amplitude is decreased at the stator throat. The stator efficiency and pressure ratio have sinusoidal shapes as a function of time. A hysteresis-like behaviour, arising from the pulsatile inlet flow, is detected for the stator efficiency and pressure ratio as a function of inlet mass flow over one pulse period. It is important to have the smallest possible axial gap in the studied turbine type in order to maximize the efficiency. The results for the whole turbine can also be applied to some extent to similar turbines operating, for example, in space rocket engines. The use of a supersonic stator in a pulsatile inlet flow is shown to be possible.
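For reference, the total-to-static efficiency used above to compare the axial-gap cases is conventionally defined from the stagnation inlet state (01), the stagnation exit state (02) and the isentropic static exit state (2s):

```latex
% Total-to-static efficiency: actual stagnation enthalpy drop over the
% ideal (isentropic) drop to the static exit pressure.
\eta_{ts} = \frac{h_{01} - h_{02}}{h_{01} - h_{2s}}
```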
Abstract:
Crystallization is a purification method used to obtain crystalline product of a certain crystal size. It is one of the oldest industrial unit processes, and it is commonly used in modern industry owing to its good purification capability from rather impure solutions with reasonably low energy consumption. However, the process is extremely challenging to model and control because it involves inhomogeneous mixing and many simultaneous phenomena, such as nucleation, crystal growth and agglomeration. All these phenomena depend on supersaturation, i.e. the difference between the actual liquid phase concentration and the solubility. Homogeneous mass and heat transfer in the crystallizer would greatly simplify the modelling and control of crystallization processes; such conditions are, however, not the reality, especially in industrial-scale processes. Consequently, the hydrodynamics of crystallizers, i.e. the combination of mixing, feed and product removal flows, and recycling of the suspension, needs to be thoroughly investigated. Understanding of hydrodynamics is important in crystallization, especially in larger-scale equipment where uniform flow conditions are difficult to attain. It is also important to understand the different size scales of mixing: micro-, meso- and macromixing. Fast processes, like nucleation and chemical reactions, are typically highly dependent on micro- and mesomixing, but macromixing, which equalizes the concentrations of all the species within the entire crystallizer, cannot be disregarded. This study investigates the influence of hydrodynamics on crystallization processes. Modelling of crystallizers with the mixed suspension mixed product removal (MSMPR) theory (ideal mixing), computational fluid dynamics (CFD), and a compartmental multiblock model is compared, and the importance of proper verification of the CFD and multiblock models is demonstrated. In addition, the influence of different hydrodynamic conditions on reactive crystallization process control is studied. Finally, the effect of extreme local supersaturation is studied using power ultrasound to initiate nucleation. The present work shows that mixing and chemical feeding conditions clearly affect induction time and cluster formation, nucleation, growth kinetics, and agglomeration. Consequently, the properties of crystalline end products, e.g. crystal size and crystal habit, can be influenced by management of the mixing and feeding conditions. Impurities may have varying impacts on crystallization processes. As an example, manganese ions were shown to replace magnesium ions in the crystal lattice of magnesium sulphate heptahydrate, increasing the crystal growth rate significantly, whereas sodium ions showed no interaction at all. Modelling of continuous crystallization based on MSMPR theory showed that the model is feasible in a small laboratory-scale crystallizer, whereas in larger pilot- and industrial-scale crystallizers hydrodynamic effects should be taken into account. For that reason, CFD and multiblock modelling are shown to be effective tools for modelling crystallization with inhomogeneous mixing. The present work also shows that the selection of the measurement point, or points in the case of multiprobe systems, is crucial when process analytical technology (PAT) is used to control larger-scale crystallization. The thesis concludes by describing how control of local supersaturation by highly localized ultrasound was successfully applied to induce nucleation and to control polymorphism in the reactive crystallization of L-glutamic acid.
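For reference, the ideal-mixing MSMPR theory mentioned above predicts, at steady state, an exponential crystal size distribution, with supersaturation defined as the driving force:

```latex
% Steady-state MSMPR crystal size distribution: nuclei population density n^0,
% growth rate G, mean residence time \tau; supersaturation as the driving force.
n(L) = n^{0} \exp\!\left(-\frac{L}{G\,\tau}\right), \qquad \Delta c = c - c^{*}
```

It is exactly this idealization that breaks down in pilot- and industrial-scale equipment, which is why the thesis moves on to CFD and compartmental multiblock models.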
Abstract:
This thesis presents a three-dimensional, semi-empirical, steady-state model for simulating combustion, gasification, and the formation of emissions in circulating fluidized bed (CFB) processes. In a large-scale CFB furnace, the local feeding of fuel, air, and other input materials, as well as the limited mixing rate of the different reactants, produces inhomogeneous process conditions. To simulate the real conditions, the furnace should be modelled three-dimensionally or the three-dimensional effects should be taken into account. The only available methods for simulating large CFB furnaces three-dimensionally are semi-empirical models, which apply a relatively coarse calculation mesh and a combination of fundamental conservation equations, theoretical models and empirical correlations; the number of such models is extremely small. The main objective of this work was to achieve a model which can be applied to calculating industrial-scale CFB boilers and which can simulate all the essential sub-phenomena: fluid dynamics, reactions, the attrition of particles, and heat transfer. The core of the work was to develop the model frame and the required sub-models for determining the combustion and sorbent reactions. The objective was reached, and the developed model was successfully used for studying various industrial-scale CFB boilers combusting different types of fuel. The model for sorbent reactions, which includes the main reactions for calcitic limestones, was applied to studying the new phenomena possibly occurring in oxygen-fired combustion. The presented combustion and sorbent models and principles can be utilized in other modelling approaches as well, including other empirical and semi-empirical approaches and CFD-based simulations. The main achievement is the overall model frame, which can be utilized for the further development and testing of new sub-models and theories, and for concentrating the knowledge gathered from the experimental work carried out at bench-scale, pilot-scale and industrial-scale apparatus, and from the computational work performed with other modelling methods.
Abstract:
Computational model-based simulation methods were developed for the modelling of bioaffinity assays. Bioaffinity-based methods are widely used to quantify a biological substance in biological research and development and in routine clinical in vitro diagnostics. Bioaffinity assays are based on the high affinity and structural specificity between the binding biomolecules. The simulation methods developed are based on a mechanistic assay model, which relies on chemical reaction kinetics and describes the formation of the bound component as a function of time from the initial binding interaction. The simulation work focused on studying the behaviour and reliability of bioaffinity assays and the possibilities that kinetic modelling of the binding reaction provides, such as predicting assay results even before the binding reaction has reached equilibrium. A rapid quantitative result from a clinical bioaffinity assay can be very significant; for example, even the smallest elevation of a heart muscle marker reveals a cardiac injury. The simulation methods were used to identify critical error factors in rapid bioaffinity assays. A new kinetic calibration method was developed to calibrate a measurement system from kinetic measurement data using only one standard concentration. A node-based method was developed to model multi-component binding reactions, which have been a challenge for traditional numerical methods. The node-based method was also used to model protein adsorption as an example of nonspecific binding of biomolecules. These methods have been compared with experimental data from practice and can be utilized in in vitro diagnostics, drug discovery and medical imaging.
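A minimal sketch of the kind of mechanistic kinetic model described above follows: simple one-to-one binding integrated in time, with the bound fraction read out both early and at (near) equilibrium. The rate constants and concentrations are illustrative assumptions, not values from the thesis.

```python
# One-to-one binding kinetics: d[AB]/dt = kon*[A][B] - koff*[AB],
# integrated with explicit Euler; all parameter values are illustrative.
kon, koff = 1e5, 1e-3        # association [1/(M*s)] and dissociation [1/s] rates
A0, B0 = 1e-9, 1e-8          # initial analyte and binder concentrations [M]
AB, dt = 0.0, 0.1            # bound complex [M], time step [s]

trace = []
for step in range(36000):    # one hour of reaction time
    dAB = kon * (A0 - AB) * (B0 - AB) - koff * AB
    AB += dAB * dt
    trace.append(AB)

# the early-time kinetics already constrain the equilibrium signal, which is
# the idea behind reading and calibrating the assay before equilibrium
print(f"bound fraction at 60 s: {trace[600]/A0:.3f}, at 1 h: {trace[-1]/A0:.3f}")
```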
Abstract:
Innovative gas-cooled reactors, such as the pebble bed reactor (PBR) and the gas-cooled fast reactor (GFR), offer higher efficiency and new application areas for nuclear energy. Numerical methods were applied and developed to analyse the specific features of these reactor types with fully three-dimensional calculation models. In the first part of this thesis, the discrete element method (DEM) was used for physically realistic modelling of the packing of fuel pebbles in PBR geometries, and methods were developed for utilising the DEM results in subsequent reactor physics and thermal-hydraulics calculations. In the second part, the flow and heat transfer for a single gas-cooled fuel rod of a GFR were investigated with computational fluid dynamics (CFD) methods. An in-house DEM implementation was validated and used for packing simulations, in which the effect of several parameters on the resulting average packing density was investigated. The restitution coefficient was found to have the most significant effect. The results can be utilised in further work to obtain a pebble bed with a specific packing density. The packing structures of selected pebble beds were also analysed in detail, and local variations in the packing density were observed, which should be taken into account especially in reactor core thermal-hydraulic analyses. Two open source DEM codes were used to produce stochastic pebble bed configurations to add realism and improve the accuracy of criticality calculations performed with the Monte Carlo reactor physics code Serpent. The Russian ASTRA criticality experiments were calculated. Pebble beds corresponding to the experimental specifications within measurement uncertainties were produced in DEM simulations and successfully exported into the subsequent reactor physics analysis. With the developed approach, two typical issues in Monte Carlo reactor physics calculations of pebble bed geometries were avoided. A novel method was developed and implemented as a MATLAB code to calculate the porosities in the cells of a CFD calculation mesh constructed over a pebble bed obtained from DEM simulations. The code was further developed to distribute power and temperature data accurately between the discrete-based reactor physics and continuum-based thermal-hydraulics models to enable coupled reactor core calculations. The developed method was also found useful for analysing sphere packings in general. CFD calculations were performed to investigate the pressure losses and heat transfer in three-dimensional air-cooled smooth and rib-roughened rod geometries, housed inside a hexagonal flow channel representing a sub-channel of a single fuel rod of a GFR. The CFD geometry represented the test section of the L-STAR experimental facility at Karlsruhe Institute of Technology, and the calculation results were compared to the corresponding experimental results. Knowledge was gained of the adequacy of various turbulence models and of the modelling requirements and issues related to the specific application. The obtained pressure loss results were in relatively good agreement with the experimental data. Heat transfer in the smooth rod geometry was somewhat underpredicted, which can partly be explained by unaccounted heat losses and uncertainties. In the rib-roughened geometry, heat transfer was severely underpredicted by the realisable k–ε turbulence model used. An additional calculation with a v²–f turbulence model showed a significant improvement in the heat transfer results, most likely due to the better performance of the model in separated flow problems. Further investigations are suggested before using CFD to draw conclusions about the heat transfer performance of rib-roughened GFR fuel rod geometries. It is suggested that the viewpoints of numerical modelling be included in the planning of experiments, to ease the challenging model construction and simulations and to avoid introducing additional sources of uncertainty. To facilitate the use of advanced calculation approaches, the multi-physical aspects of experiments should also be considered and documented in reasonable detail.
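The porosity-mapping step can be illustrated with a small Monte Carlo sketch that estimates the fluid fraction of a CFD cell against DEM sphere positions; the sphere data, cell bounds and sample count below are hypothetical (the thesis implements this step as a MATLAB code).

```python
# Estimate the porosity (fluid fraction) of one CFD cell by Monte Carlo
# sampling against DEM pebble positions; all geometry data is illustrative.
import numpy as np

rng = np.random.default_rng(1)
spheres = np.array([[0.03, 0.03, 0.03], [0.07, 0.06, 0.05]])  # pebble centres [m]
r_pebble = 0.03                                               # pebble radius [m]

def cell_porosity(lo, hi, n_samples=20000):
    """Fraction of random points in the box [lo, hi] lying outside all pebbles."""
    pts = rng.uniform(lo, hi, size=(n_samples, 3))
    d2 = ((pts[:, None, :] - spheres[None, :, :]) ** 2).sum(axis=2)
    inside_any = (d2 < r_pebble**2).any(axis=1)
    return 1.0 - inside_any.mean()

print(f"cell porosity: {cell_porosity([0.0, 0.0, 0.0], [0.1, 0.1, 0.1]):.3f}")
```

Repeating this over every mesh cell yields the local porosity field that couples the discrete DEM description to the continuum thermal-hydraulics model.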
Abstract:
The Kalman filter is a recursive mathematical tool that plays an increasingly vital role in innumerable fields of study. The filter has been put to service in a multitude of studies involving time series modelling, including financial time series modelling. Modelling time series data in Computational Market Dynamics (CMD) can be accomplished using the Jablonska-Capasso-Morale (JCM) model, whose parameters have traditionally been estimated with the maximum likelihood approach. The purpose of this study is to discover whether the Kalman filter can be effectively utilized in CMD. An ensemble Kalman filter (EnKF) with 50 ensemble members was applied to US sugar prices spanning January 1960 to February 2012. The real data and the Kalman filter trajectories showed no significant discrepancies, indicating satisfactory performance of the technique. Since only US sugar prices were used, it would be interesting to see what results other data sets would yield.
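A minimal stochastic EnKF with 50 members, analogous to the setup described, might look as follows; the random-walk state model, the noise levels and the synthetic price series are illustrative assumptions, not the JCM model or the actual sugar price data.

```python
# A minimal scalar ensemble Kalman filter (EnKF) with perturbed observations;
# dynamics, noise variances and the price series are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(2)
prices = np.cumsum(rng.normal(0, 1, 200)) + 20.0   # stand-in for a price series

n_ens, q, r = 50, 0.5, 1.0                         # members, process & obs. variances
ens = rng.normal(prices[0], 1.0, n_ens)            # initial ensemble
estimates = []
for y in prices:
    ens = ens + rng.normal(0, np.sqrt(q), n_ens)   # forecast: random-walk dynamics
    y_pert = y + rng.normal(0, np.sqrt(r), n_ens)  # perturbed observations
    P = ens.var(ddof=1)                            # ensemble forecast variance
    K = P / (P + r)                                # Kalman gain (scalar, H = 1)
    ens = ens + K * (y_pert - ens)                 # analysis update
    estimates.append(ens.mean())

rmse = np.sqrt(np.mean((np.array(estimates) - prices) ** 2))
print(f"EnKF tracking RMSE: {rmse:.3f}")
```

Comparing the filtered trajectory against the observed series, as done here with the RMSE, is the same kind of discrepancy check the study reports for the sugar prices.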
Abstract:
The advancement of science and technology makes it clear that no single perspective is any longer sufficient to describe the true nature of any phenomenon. That is why interdisciplinary research is gaining more attention over time. An excellent example of this type of research is natural computing, which stands on the borderline between biology and computer science. The contribution of research done in natural computing is twofold: on one hand, it sheds light on how nature works and how it processes information and, on the other hand, it provides some guidelines on how to design bio-inspired technologies. The first direction in this thesis focuses on a nature-inspired process called gene assembly in ciliates. The second one studies reaction systems, a modelling framework whose rationale is built upon the biochemical interactions happening within a cell. The process of gene assembly in ciliates has attracted a lot of attention as a research topic in the past 15 years. Two main modelling frameworks were initially proposed at the end of the 1990s to capture the ciliates' gene assembly process, namely the intermolecular model and the intramolecular model. They were followed by other model proposals such as template-based assembly and DNA rearrangement pathways recombination models. In this thesis we are interested in a variation of the intramolecular model called the simple gene assembly model, which focuses on the simplest possible folds in the assembly process. We propose a new framework called directed overlap-inclusion (DOI) graphs to overcome the limitations that previously introduced models faced in capturing all the combinatorial details of the simple gene assembly process. We investigate a number of combinatorial properties of these graphs, including a necessary property in terms of forbidden induced subgraphs. We also introduce DOI graph-based rewriting rules that capture all the operations of the simple gene assembly model and prove that they are equivalent to the string-based formalization of the model. Reaction systems (RS) is another nature-inspired modelling framework studied in this thesis. The rationale of reaction systems is based upon two main regulation mechanisms, facilitation and inhibition, which control the interactions between biochemical reactions. Reaction systems is a modelling framework complementary to traditional quantitative frameworks, focusing on explicit cause-effect relationships between reactions. The explicit formulation of the facilitation and inhibition mechanisms behind reactions, as well as the focus on interactions between reactions (rather than the dynamics of concentrations), makes their applicability potentially wide and useful beyond biological case studies. In this thesis, we construct a reaction system model corresponding to the heat shock response mechanism, based on a novel concept of a dominance graph that captures the competition for resources in the ODE model. We also introduce for RS various concepts inspired by biology, e.g., mass conservation, steady state, periodicity, etc., to perform model checking of reaction-system-based models. We prove that the complexity of the decision problems related to these properties varies from P to NP-complete and coNP-complete to PSPACE-complete. We further focus on the mass conservation relation in an RS, introduce the conservation dependency graph to capture the relation between the species, and propose an algorithm to list the conserved sets of a given reaction system.
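The basic reaction systems semantics referred to above is compact enough to sketch directly: a reaction (R, I, P) is enabled on a state T when R ⊆ T and I ∩ T = ∅, and the successor state is the union of the products of all enabled reactions (entities not produced simply vanish). The example entities below are loosely inspired by the heat shock response model but are hypothetical.

```python
# A small sketch of the basic reaction systems step: each reaction is a
# triple (reactants R, inhibitors I, products P); the entity names are
# hypothetical illustrations, not the thesis' heat shock model.
def res(reactions, state):
    """One reaction-system step on the set of entities `state`."""
    out = set()
    for R, I, P in reactions:
        if R <= state and not (I & state):   # facilitation and inhibition
            out |= P
    return out

reactions = [
    ({"hsf"}, {"hsp"}, {"hsf3"}),                # trimerization, blocked by chaperone
    ({"hsf3", "hse"}, {"hsp"}, {"hsp", "hse"}),  # bound trimer induces hsp
    ({"hse"}, {"hsf3"}, {"hse"}),                # hse persists unless bound by trimer
]
state = {"hsf", "hse"}
for step in range(3):
    state = res(reactions, state)
    print(step, sorted(state))
```

Note the absence of permanency: unlike concentration-based ODE models, an entity survives a step only if some enabled reaction produces it, which is exactly what makes cause-effect relationships explicit in this framework.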
Abstract:
The last two decades have provided a vast opportunity to live in and explore compulsive imaginary worlds, or virtual worlds, through massively multiplayer online role-playing games (MMORPGs). An MMORPG gives its users a wide range of opportunities to participate with multiple players on the same platform, to communicate, and to perform real-time actions. There is a virtual economy in these games which is largely player-driven. In-game currency allows users to build up their avatars, to buy or sell the goods necessary to play and survive in the games, and so on. Focusing on the virtual economy generated through EVE Online, this thesis mainly examines how the prices of minerals in EVE Online behave, by applying the Jabłonska-Capasso-Morale (JCM) mathematical simulation model, in order to verify to what degree the model can reproduce virtual economy behaviour. The model is applied to the buy and sell prices of two minerals, namely isogen and morphite. The simulation results demonstrate that the JCM model fits the mineral prices reasonably well, which lets us conclude that virtual economies behave similarly to real ones.
Abstract:
A data centre is a centralized repository, either physical or virtual, for the storage, management and dissemination of data and information organized around a particular body, and it is a nerve centre of the present IT revolution. Data centres are expected to serve uninterruptedly around the year to perform their functions, and they consequently consume enormous amounts of energy. Tremendous growth in demand from the IT industry has made it necessary to develop newer technologies for better data centre operation. Energy conservation activities in data centres mainly concentrate on the air conditioning system, since it is the major mechanical sub-system and consumes a considerable share of the total power consumption of the data centre. The data centre energy picture is best represented by the power usage effectiveness (PUE), defined as the ratio of the total facility power to the IT equipment power. Its value is always greater than one, and a large PUE indicates that the sub-systems draw a large share of the facility power, so the data centre performs poorly from the standpoint of energy conservation. PUE values of 1.4 to 1.6 are achievable through proper design and management techniques. Optimizing the air conditioning system offers an enormous opportunity to bring down the PUE value. The air conditioning system can be optimized by two approaches, namely thermal management and air flow management. Thermal management systems have been introduced by some companies, but they are highly sophisticated and costly and have so far attracted little attention in common practice.
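The PUE definition above is a one-line computation; the sketch below evaluates it for hypothetical facility power figures.

```python
# PUE illustration; the power figures are hypothetical.
def pue(total_facility_kw, it_equipment_kw):
    """PUE = total facility power / IT equipment power (always >= 1)."""
    return total_facility_kw / it_equipment_kw

it_load = 1000.0                   # kW drawn by servers, storage and network
cooling, other = 450.0, 100.0      # kW for air conditioning and ancillary systems
print(f"PUE = {pue(it_load + cooling + other, it_load):.2f}")   # 1.55, within 1.4-1.6
```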