990 results for Transient Modeling
Abstract:
Advancements in high-throughput technologies to measure increasingly complex biological phenomena at the genomic level are rapidly changing the face of biological research from the single-gene, single-protein experimental approach to studying the behavior of a gene in the context of the entire genome (and proteome). This shift in research methodologies has resulted in a new field of network biology that deals with modeling cellular behavior in terms of network structures such as signaling pathways and gene regulatory networks. In these networks, different biological entities such as genes, proteins, and metabolites interact with each other, giving rise to a dynamical system. Even though there exists a mature field of dynamical systems theory to model such network structures, some technical challenges are unique to biology, such as the inability to measure precise kinetic information on gene-gene or gene-protein interactions and the need to model increasingly large networks comprising thousands of nodes. These challenges have renewed interest in developing new computational techniques for modeling complex biological systems. This chapter presents a modeling framework based on Boolean algebra and finite-state machines that is reminiscent of the approach used for digital circuit synthesis and simulation in the field of very-large-scale integration (VLSI). The proposed formalism provides a common mathematical framework for developing computational techniques to model different aspects of regulatory networks, such as steady-state behavior, stochasticity, and gene perturbation experiments.
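The Boolean/finite-state-machine view described above can be made concrete with a small sketch. The three-gene network and its update rules below are purely illustrative (not taken from the chapter): each gene is either on or off, all genes are updated synchronously, and iteration stops when a state repeats, which identifies the attractor (a steady state when the cycle has length 1).

```python
# Minimal synchronous Boolean-network sketch; the gene names and
# regulatory rules are hypothetical, for illustration only.

def step(state):
    a, b, c = state
    # Hypothetical rules: A activates B, B activates C, C represses A.
    return (not c, a, b)

def find_attractor(state, max_steps=64):
    """Iterate until a state repeats; return the attractor cycle."""
    seen = {}
    trajectory = []
    for i in range(max_steps):
        if state in seen:
            return trajectory[seen[state]:]  # cycle (length 1 = steady state)
        seen[state] = i
        trajectory.append(state)
        state = step(state)
    return None

cycle = find_attractor((True, False, False))
print(len(cycle))  # -> 6 for this toy network: a six-state limit cycle
```

Because the state space of an N-gene network is finite (2^N states), every trajectory must eventually enter such a cycle, which is what makes steady-state analysis of these models tractable.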
Abstract:
In this paper we propose a method for computing JPEG quantization matrices for a given mean square error (MSE) or PSNR. We then employ our method to compute JPEG standard progressive operation mode definition scripts using a quantization approach. It is therefore no longer necessary to use a trial-and-error procedure to obtain a desired PSNR and/or definition script, which reduces cost. Firstly, we establish a relationship between a Laplacian source and its uniform quantization error. We apply this model to the coefficients obtained in the discrete cosine transform stage of the JPEG standard. An image may then be compressed using the JPEG standard under a global MSE (or PSNR) constraint and a set of local constraints determined by the JPEG standard and visual criteria. Secondly, we study the JPEG standard progressive operation mode from a quantization-based approach. A relationship is found between the measured image quality at a given stage of the coding process and a quantization matrix. Thus, the definition script construction problem can be reduced to a quantization problem. Simulations show that our method generates better quantization matrices than the classical method based on scaling the JPEG default quantization matrix. The PSNR estimate usually has an error smaller than 1 dB, and this error decreases for high PSNR values. Definition scripts may be generated that avoid an excessive number of stages and remove small stages that do not contribute a noticeable image quality improvement during decoding.
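As a rough illustration of the quantization-to-PSNR relationship the paper exploits, the sketch below quantizes samples from a Laplacian source with a uniform step and compares the measured MSE against the classical high-resolution approximation MSE ≈ Δ²/12, then converts MSE to PSNR. The scale and step values are invented, and this is not the paper's actual procedure for constructing quantization matrices, only the underlying source/quantizer relationship.

```python
# Sketch: uniform quantization error of a Laplacian source vs. PSNR.
import math
import random

random.seed(0)

def laplacian(scale, n):
    # A Laplacian variate is the difference of two i.i.d. exponentials.
    return [random.expovariate(1 / scale) - random.expovariate(1 / scale)
            for _ in range(n)]

def quantize(x, step):
    return step * round(x / step)

samples = laplacian(scale=10.0, n=100_000)
step = 8.0  # illustrative quantizer step (a quantization-table entry)
mse = sum((x - quantize(x, step)) ** 2 for x in samples) / len(samples)
psnr = 10 * math.log10(255 ** 2 / mse)

# High-resolution approximation: MSE ~ step^2 / 12
print(mse, step ** 2 / 12, psnr)
```

Inverting this relationship (pick the step that yields a target MSE contribution per DCT coefficient) is the basic idea that lets a target PSNR be mapped to quantization-table entries without trial and error.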
Abstract:
The paper presents some contemporary approaches to spatial environmental data analysis. The main topics concentrate on decision-oriented problems of environmental spatial data mining and modeling: valorization and representativity of data with the help of exploratory data analysis, spatial predictions, probabilistic and risk mapping, and the development and application of conditional stochastic simulation models. The innovative part of the paper presents an integrated/hybrid model: machine learning (ML) residuals sequential simulations (MLRSS). The models are based on multilayer perceptron and support vector regression ML algorithms used for modeling long-range spatial trends, followed by sequential simulations of the residuals. ML algorithms deliver non-linear solutions for spatially non-stationary problems, which are difficult for the geostatistical approach. Geostatistical tools (variography) are used to characterize the performance of ML algorithms by analyzing the quality and quantity of the spatially structured information extracted from data with ML algorithms. Sequential simulations provide an efficient assessment of uncertainty and spatial variability. A case study on the Chernobyl fallout illustrates the performance of the proposed model. It is shown that probability mapping, provided by the combination of ML data-driven and geostatistical model-based approaches, can be efficiently used in the decision-making process. (C) 2003 Elsevier Ltd. All rights reserved.
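A drastically simplified 1-D sketch of the trend-plus-residuals decomposition behind MLRSS: here an ordinary least-squares line stands in for the multilayer perceptron / support vector trend model, and a single unconditional resampling of the residuals stands in for a sequential simulation. All values are synthetic and illustrative.

```python
# Toy "trend model + simulated residuals" decomposition (MLRSS-style idea).
import random

random.seed(1)

xs = [i / 10 for i in range(100)]                       # 1-D "coordinates"
data = [2.0 + 0.5 * x + random.gauss(0, 0.3) for x in xs]  # trend + noise

# Fit the trend z = a + b*x by ordinary least squares
# (a stand-in for the ML trend model in the paper).
n = len(xs)
sx, sz = sum(xs), sum(data)
sxx = sum(x * x for x in xs)
sxz = sum(x * z for x, z in zip(xs, data))
b = (n * sxz - sx * sz) / (n * sxx - sx * sx)
a = (sz - b * sx) / n

# Residuals carry the spatially structured variability left after the trend.
residuals = [z - (a + b * x) for x, z in zip(xs, data)]

# One crude unconditional "simulation": resample residuals onto the trend.
sim = [a + b * x + random.choice(residuals) for x in xs]
print(round(a, 2), round(b, 2))
```

In the actual method the residual simulation is *sequential* and conditioned on neighboring values via a variogram model; the point of the sketch is only the split into a deterministic non-linear trend and stochastic residuals.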
Abstract:
THESIS ABSTRACT Garnets are one of the key metamorphic minerals used to study peak metamorphic conditions or crystallization ages. Equilibrium is typically assumed between the garnet and the matrix. This thesis attempts to understand garnet growth in the Zermatt-Saas Fee (ZSF) eclogites, and discusses consequences for Sm/Nd and Lu/Hf dating and the equilibrium assumption. All studied garnets from the ZSF eclogites are strongly zoned in Mn, Fe, Mg, and Ca. Methods based on chemical zoning patterns and on 3D spatial statistics indicate different growth mechanisms depending on the sample studied. Garnets from the Pfulwe area grew in a system where surface kinetics likely dominated over intergranular diffusion kinetics. Garnets from two other localities, Nuarsax and Lago di Cignana, seem to have grown in a system where intergranular diffusion kinetics dominated over surface kinetics, at least during initial growth. Garnets reveal strong prograde REE+Y zoning. They contain narrow central peaks for Lu + Yb + Tm ± Er and at least one additional small peak towards the rim. The REE Sm + Eu + Gd + Tb ± Dy are depleted in the cores but show one prominent peak close to the rim. It is shown that these patterns can be explained using a transient matrix diffusion model where REE uptake is limited by diffusion in the matrix surrounding the porphyroblast. The secondary peaks in the garnet profiles are interpreted to reflect thermally activated diffusion due to a temperature increase during prograde metamorphism. The model predicts anomalously low 176Lu/177Hf and 147Sm/144Nd ratios in garnets where growth rates are fast compared to diffusion of the REE, which decreases garnet isochron precision. The sharp Lu zoning was further used to constrain maximum Lu volume diffusion rates in garnet. The modeled minimum pre-exponential diffusion coefficient that fits the measured central peak is on the order of D0 = 5.7 × 10^-6 m^2/s, taking an activation energy of 270 kJ/mol.
The latter was chosen in agreement with experimentally determined values. This can be used to estimate a minimum closure temperature of around 630°C for the ZSF zone. Zoning of REE was combined with published Lu/Hf and Sm/Nd age information to redefine the prograde crystallization interval for the Lago di Cignana UHP eclogites. Modeling revealed that a prograde growth interval on the order of 25 m.y. is needed to produce the measured spread in ages. SUMMARY: Garnet is a key metamorphic mineral for determining peak metamorphic conditions and crystallization ages. Equilibrium between garnet and the matrix is usually assumed. This study aims to understand garnet growth in the eclogites of the Zermatt-Saas Fee (ZSF) zone and to examine some of the consequences for Sm/Nd and Lu/Hf dating. All the garnets studied from the ZSF eclogites are strongly zoned in Mn, Fe and Mg, and partially in Ca. Methods based on chemical zoning patterns and on 3D spatial statistics indicate different growth mechanisms depending on the sampling locality. Garnets from the Pfulwe area probably grew in a system dominated by surface kinetics rather than by intergranular diffusion kinetics. Garnets from two other localities, Nuarsax and Lago di Cignana, appear to have crystallized in a system dominated by intergranular diffusion, at least during the early stages of growth. The garnets show strong prograde zoning in rare earth elements (REE) and in Y. The profiles have a narrow central peak in Lu + Yb + Tm ± Er and at least one additional small peak towards the rim. The garnet cores are depleted in Sm + Eu + Gd + Tb ± Dy, but the rims are marked by a prominent peak in these REE.
These profiles are explained by a matrix diffusion model in which the REE supply is limited by diffusion in the matrix surrounding the porphyroblasts. The secondary peaks near the grain rim reflect diffusion thermally activated by the temperature increase during prograde metamorphism. This model predicts anomalously low 176Lu/177Hf and 147Sm/144Nd ratios when growth rates are fast compared with REE diffusion, which reduces the precision of garnet isochrons. The sharp Lu zoning made it possible to constrain the maximum volume diffusion rate numerically. The modeled minimum diffusion coefficient consistent with the measured peaks is on the order of D0 = 5.7 × 10^-6 m^2/s, taking an experimentally determined activation energy of ~270 kJ/mol. The minimum closure temperature is thus estimated at around 630°C for the ZSF zone. New REE zoning data are combined with ages obtained from Lu/Hf and Sm/Nd ratios to redefine the prograde crystallization interval for the Lago di Cignana UHP eclogites. The modeling requires a prograde growth interval of at least 25 Ma to reproduce the previously measured ages.
SUMMARY FOR THE GENERAL PUBLIC: One of the main goals of the metamorphic petrologist is to extract from rocks information about the temporal, thermal and barometric evolution they underwent during the formation of a mountain belt. Garnet is one of the key minerals in a wide variety of metamorphic rocks. It has been the subject of numerous studies in terrains of varied origins and in experimental work aimed at understanding its stability fields, its reactions and its coexistence with other minerals. This makes garnet one of the most attractive minerals for dating rocks. However, when it is used for dating and/or geothermobarometry, garnet is always assumed to grow in equilibrium with the coexisting matrix phases. Yet mineral growth is generally tied to disequilibrium processes. This study aims to understand how garnet grows in the Zermatt-Saas Fee eclogites and thus to assess the degree of disequilibrium. It also seeks to explain the age differences obtained from garnets at the different localities of the Zermatt-Saas Fee unit. The main question when studying garnet growth mechanisms is: among the processes at play during garnet growth (dissolution of the precursor minerals, transport of elements to the new garnet, precipitation of a new layer on the mineral surface), which is the slowest and thus sets the degree of disequilibrium? The garnets from one locality (Pfulwe) indicate that surface attachment is the slowest process, in contrast to the garnets from the other localities (Lago di Cignana, Nuarsax), in which transport processes are the slowest. This shows that the dominant processes vary even between similar rocks of the same tectonic unit, and implies that the processes must be determined individually for each rock in order to assess the degree of disequilibrium of its garnet. All the analyzed garnets show a high core concentration of the rare earth elements Lu + Yb + Tm ± Er that decreases towards the grain rim. Conversely, the rare earths Sm + Eu + Gd + Tb ± Dy are depleted in the core and concentrated at the grain rim. Modeling reveals that these profiles are due to slow transport kinetics of the rare earth elements.
In addition, the models predict low concentrations of the radiogenic parent elements in certain rocks, which strongly affects the precision of ages obtained by the isochron method. This means that the rocks best suited for dating should contain neither abundant garnet nor very large crystals, because in such cases competition between crystals limits the amount of parent elements available to each crystal to low concentrations.
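For illustration, the closure-temperature argument rests on the Arrhenius relation D(T) = D0 · exp(-Ea / RT). Using the values quoted in the thesis (taking the pre-exponential factor as D0 = 5.7e-6 m^2/s, as given in the French summary, and Ea = 270 kJ/mol), the Lu diffusivity near the estimated ~630 °C closure temperature can be computed directly; the evaluation temperature is the thesis estimate, everything else follows from the formula.

```python
# Arrhenius diffusivity with the quoted Lu-in-garnet parameters.
import math

R = 8.314      # gas constant, J/(mol K)
D0 = 5.7e-6    # pre-exponential factor, m^2/s (value from the thesis)
Ea = 270e3     # activation energy, J/mol (value from the thesis)

def D(T_kelvin):
    """Volume diffusion coefficient at absolute temperature T."""
    return D0 * math.exp(-Ea / (R * T_kelvin))

# Diffusivity near the estimated ~630 degC closure temperature:
print(D(630 + 273.15))  # on the order of 1e-21 m^2/s
```

Such extremely small diffusivities are why sharp Lu growth zoning can survive prograde heating, and why the measured central peak constrains a *maximum* diffusion rate.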
Abstract:
The safe use of nuclear power plants (NPPs) requires a deep understanding of the functioning of the physical processes and systems involved. Studies on thermal hydraulics have been carried out in various separate effects and integral test facilities at Lappeenranta University of Technology (LUT), either to ensure the functioning of the safety systems of light water reactors (LWR) or to produce validation data for the computer codes used in safety analyses of NPPs. Several examples of safety studies on the thermal hydraulics of nuclear power plants are discussed. The studies are related to the physical phenomena present in different processes in NPPs, such as rewetting of the fuel rods, emergency core cooling (ECC), natural circulation, small break loss-of-coolant accidents (SBLOCA), non-condensable gas release and transport, and passive safety systems. Studies on both VVER and advanced light water reactor (ALWR) systems are included. The set of cases includes separate effects tests for understanding and modeling a single physical phenomenon, separate effects tests to study the behavior of an NPP component or a single system, and integral tests to study the behavior of the whole system. The following steps can be found in these studies, though not necessarily all in the same study. Experimental studies as such have provided solutions to existing design problems. Experimental data have been created to validate a single model in a computer code. Validated models are used in various transient analyses of scaled facilities or NPPs. Integral test data are used to validate the computer codes as a whole, to see how the implemented models work together in a code. In the final stage, test results from the facilities are transferred to the NPP scale using computer codes.
Some of the experiments have confirmed the expected behavior of the system or procedure under study; in other experiments there have been unexpected phenomena that have caused changes to the original design to avoid the recognized problems. This is the main motivation for experimental studies on the thermal hydraulics of NPP safety systems. Naturally, the behavior of new system designs has to be checked with experiments, but so does that of existing designs if they are applied in conditions that differ from those they were originally designed for. New procedures have been developed for existing reactors, and new safety-related systems for new nuclear power plant concepts. New experiments have been continuously needed.
Abstract:
The Wigner higher-order moment spectra (WHOS) are defined as extensions of the Wigner-Ville distribution (WD) to higher-order moment spectra domains. A general class of time-frequency higher-order moment spectra is also defined in terms of arbitrary higher-order moments of the signal, as generalizations of Cohen's general class of time-frequency representations. The properties of the general class of time-frequency higher-order moment spectra can be related to the properties of WHOS, which are, in fact, extensions of the properties of the WD. Discrete time and frequency Wigner higher-order moment spectra (DTF-WHOS) distributions are introduced for signal processing applications and are shown to be implemented with two FFT-based algorithms. One application is presented in which the Wigner bispectrum (WB), a WHOS in the third-order moment domain, is utilized for the detection of transient signals embedded in noise. The WB is compared with the WD through simulation examples and analysis of real sonar data. It is shown that better detection schemes can be derived, at low signal-to-noise ratio, when the WB is applied.
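As a sketch of the second-order case that the WHOS generalize, the snippet below computes a discrete Wigner-Ville distribution by direct summation over the lag variable. Indexing conventions for the discrete WD vary between texts, so treat this as one plausible variant; the FFT-based algorithms mentioned in the abstract compute the same quantity far more efficiently.

```python
# Direct-summation discrete Wigner-Ville distribution (real part).
import cmath
import math

def wigner_ville(x):
    N = len(x)
    W = [[0.0] * N for _ in range(N)]
    for n in range(N):
        mmax = min(n, N - 1 - n)          # largest symmetric lag at time n
        for k in range(N):
            acc = 0j
            for m in range(-mmax, mmax + 1):
                acc += (x[n + m] * x[n - m].conjugate()
                        * cmath.exp(-4j * math.pi * k * m / N))
            W[n][k] = acc.real
    return W

# A complex sinusoid concentrates along its frequency line.
N, f = 32, 4
x = [cmath.exp(2j * math.pi * f * t / N) for t in range(N)]
W = wigner_ville(x)
row = W[N // 2]                            # time slice at mid-signal
print(round(row[f]))  # peak value 2*mmax + 1 = 31 at k = f
```

Note the doubled frequency argument (4π rather than 2π), which is characteristic of the WD kernel x(n+m)x*(n-m) and is also the source of the well-known frequency aliasing of the discrete WD.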
Abstract:
The software development industry is constantly evolving. The rise of agile methodologies in the late 1990s, together with new development tools and technologies, demands growing attention from everybody working in this industry. Organizations have, however, had a mixture of various processes and different process languages, since a standard software development process language has not been available. A promising process meta-model called Software & Systems Process Engineering Meta-Model (SPEM) 2.0 has recently been released. It is applied by tools such as Eclipse Process Framework Composer, which is designed for implementing and maintaining processes and method content, and aims to support a broad variety of project types and development styles. This thesis presents the concepts of software processes, models, traditional and agile approaches, method engineering, and software process improvement. Some of the best-known methodologies (RUP, OpenUP, OpenMethod, XP and Scrum) are also introduced and compared. The main focus is on the Eclipse Process Framework and SPEM 2.0: their capabilities, usage and modeling. As a proof of concept, I present a case study of modeling OpenMethod with EPF Composer and SPEM 2.0. The results show that the new meta-model and tool make it possible to easily manage method content, publish versions with customized content, and connect project tools (such as MS Project) with the process content. Software process modeling also acts as a process improvement activity.
Abstract:
The subject of this study is forecasting the capacity needs of the Fenix information system developed by TietoEnator Oy. The goals of the work are to become familiar with the different subsystems of the Fenix system, to find a way to separate and model the load each subsystem places on the system, and to determine, on a preliminary level, which parameters affect the load created by those subsystems. Part of the work is to examine different simulation alternatives and to assess their suitability for modeling complex systems. Based on the collected information, a simulation model describing the load on the system's data warehouse is created. Using information obtained from the model together with measurements from the production system, the model is refined to correspond ever more closely to the behavior of the real system. From the model, for example, the simulated system load and queue behavior are examined. From the production system, changes in the behavior of the different load sources are measured, for example in relation to the number of users and the time of day. The results of this work are intended to serve as a basis for later follow-up research, in which the parameterization of the subsystems is refined further, the model's ability to describe the real system is improved, and the scope of the model is extended.
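The kind of load-and-queue simulation described above can be illustrated with a minimal single-server (M/M/1) queue sketch. The arrival and service rates below are invented, and the real Fenix model is far more detailed; the point is only how queue waiting behavior emerges from a load source and a service capacity.

```python
# Minimal M/M/1 queue simulation: exponential inter-arrival and service times.
import random

random.seed(2)

lam, mu = 0.8, 1.0        # arrival and service rates (utilisation rho = 0.8)
t_arrive = t_free = 0.0   # next arrival time, time the server becomes free
waits = []
for _ in range(50_000):
    t_arrive += random.expovariate(lam)
    start = max(t_arrive, t_free)       # queueing delay if the server is busy
    waits.append(start - t_arrive)
    t_free = start + random.expovariate(mu)

mean_wait = sum(waits) / len(waits)
# Theory for M/M/1: mean wait in queue Wq = rho / (mu - lam) = 4.0
print(round(mean_wait, 1))
```

Raising `lam` toward `mu` makes the mean wait grow without bound, which is exactly the capacity-planning question such a model is built to answer.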
Abstract:
Panel data can be arranged into a matrix in two ways, called 'long' and 'wide' formats (LF and WF). The two formats suggest two alternative model approaches for analyzing panel data: (i) univariate regression with varying intercept; and (ii) multivariate regression with latent variables (a particular case of structural equation model, SEM). The present paper compares the two approaches, showing in which circumstances they yield equivalent (in some cases, even numerically equal) results. We show that the univariate approach gives results equivalent to the multivariate approach when restrictions of time invariance (in the paper, the TI assumption) are imposed on the parameters of the multivariate model. It is shown that the restrictions implicit in the univariate approach can be assessed by chi-square difference testing of two nested multivariate models. In addition, common tests encountered in the econometric analysis of panel data, such as the Hausman test, are shown to have an equivalent representation as chi-square difference tests. Commonalities and differences between the univariate and multivariate approaches are illustrated using an empirical panel data set of firms' profitability as well as simulated panel data.
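The 'long' and 'wide' arrangements can be shown with a toy firm-by-year panel; the data and field names below are invented for illustration. Long format stacks one row per (unit, time) observation; wide format keeps one row per unit with one column per time point, which is the layout the multivariate/SEM approach operates on.

```python
# Long format: one row per (firm, year) observation.
long_rows = [
    ("A", 2001, 1.0), ("A", 2002, 1.5),
    ("B", 2001, 0.7), ("B", 2002, 0.9),
]

# Long -> wide: one row per firm, one column per year.
years = sorted({year for _, year, _ in long_rows})
wide = {}
for firm, year, profit in long_rows:
    wide.setdefault(firm, {})[year] = profit

# Wide -> long again: the two formats carry the same information.
long_again = sorted((f, y, p) for f, row in wide.items() for y, p in row.items())
print(wide["A"][2002], years, long_again == sorted(long_rows))
```

The round trip losing nothing is the formal sense in which the two formats are equivalent; the modeling approaches differ only in which layout their parameters are indexed by.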
Abstract:
The aim of this Master's thesis was to study the technology competences of Perlos. In the future, Perlos aims to combine and apply new technologies and smart materials to plastic mechanics. The idea was to model Perlos's competences and competence gaps, taking into account the company's future vision. The project product used for the competence modeling was an analytical measurement device of a Perlos Healthcare customer. The value of the study is considerable, since by identifying its competences and capabilities a company can create a better offering when responding to constantly growing customer requirements. The study is part of the LIIMA project funded by TEKES. The first part of the work presents theories related to competences and partnering. The competence modeling was done with an Excel-based tool, which includes modeling of the competence dependencies related to the project product and a gap analysis. Interviews were used as one of the research methods. The work and its results provide operational benefit in the field between technologies and markets.
Abstract:
The objective of this study is to examine the software delivery process of the target company, which operates in the telecommunications industry. The study focuses on modeling the delivery process, defining roles and areas of responsibility, identifying problem areas, and proposing improvements to the process. These goals are examined through theoretical process modeling techniques and the SECI process framework of knowledge management. The most important source of data was an interview study in which all units involved in the target process participated. The modeled delivery process gave the target company a better understanding of the process under examination and of the roles and responsibilities of the units working within it. Proposed improvements included defining the channels for knowledge sharing, improving trust and social networks, and large-scale implementation of knowledge management.
Abstract:
The objective of this work was to develop and validate a prognosis system for volume yield and basal area of intensively managed loblolly pine (Pinus taeda) stands, using stand and diameter class models compatible in basal area estimates. The data used in the study were obtained from plantations located in northern Uruguay. For model validation without data loss, a three-phase validation scheme was applied: first, the equations were fitted without the validation database; then, model validation was carried out; and, finally, the database was regrouped to recalibrate the parameter values. After the validation and final parameterization of the models, a simulation of the first commercial thinning was carried out. The developed prognosis system was precise and accurate in estimating basal area production per hectare or per diameter class. There was compatibility in basal area estimates between the diameter class and whole stand models, with a mean difference of -0.01 m2 ha-1. The validation scheme applied is logical and consistent, since information on the accuracy and precision of the models is obtained without the loss of any information in the estimation of the models' parameters.
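For reference, the basal-area bookkeeping behind the stand/diameter-class compatibility check reduces to summing per-tree cross-sectional areas g = πd²/4 and scaling to a hectare. The diameters and plot size below are invented, not from the study.

```python
# Basal area per hectare from per-tree diameters at breast height.
import math

dbh_cm = [18.0, 22.5, 25.0, 30.0]   # illustrative tree diameters, cm
plot_ha = 0.05                       # illustrative 500 m^2 sample plot

# Per-tree basal area g = pi * d^2 / 4, with d converted to metres.
g = [math.pi * (d / 100) ** 2 / 4 for d in dbh_cm]
ba_per_ha = sum(g) / plot_ha         # m^2 per hectare
print(round(ba_per_ha, 2))
```

Compatibility between the two model levels then means that summing the diameter-class estimates of this quantity reproduces the whole-stand estimate, which the study reports to within -0.01 m2 ha-1 on average.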
Abstract:
The use of belts in high-precision applications has become appropriate because of the rapid development in motor and drive technology as well as the implementation of timing belts in servo systems. Belt drive systems provide high speed and acceleration, accurate and repeatable motion with high efficiency, long stroke lengths, and low cost. Modeling of a linear belt-drive system and designing its position control are examined in this work. Friction phenomena and the position-dependent elasticity of the belt are analyzed. Computer-simulated results show that the developed model is adequate. A PID controller for accurate tracking and position control is designed and applied to the real test setup. Both the simulation and the experimental results demonstrate that the designed controller meets the specified performance requirements.
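A minimal discrete-time PID position-control sketch in the spirit of the controller described: the plant here is reduced to a rigid double integrator (no belt elasticity or friction, unlike the thesis model), and all gains and parameters are illustrative rather than taken from the work.

```python
# Discrete PID position control of a toy double-integrator "carriage".
dt = 0.001                      # controller/plant time step, s
kp, ki, kd = 400.0, 50.0, 40.0  # illustrative PID gains
mass = 2.0                      # kg, lumped carriage mass

pos = vel = integ = prev_err = 0.0
target = 0.1                    # m, step position reference
for _ in range(10_000):         # simulate 10 s
    err = target - pos
    integ += err * dt
    deriv = (err - prev_err) / dt
    force = kp * err + ki * integ + kd * deriv
    prev_err = err
    # Plant update (no friction or belt elasticity in this sketch).
    acc = force / mass
    vel += acc * dt
    pos += vel * dt

print(round(pos, 3))            # settles near the 0.1 m target
```

A real belt axis adds position-dependent belt stiffness and friction between the controller output and the carriage, which is precisely why the thesis models those phenomena before tuning the controller.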