850 results for Biomass dynamic models
Abstract:
Location decisions are often subject to dynamic aspects such as changes in customer demand. The answer is to allow increased flexibility in the location and capacity of facilities. Even when demand can be forecast, finding the optimal schedule for deploying and dynamically adjusting capacities remains a challenge. In this thesis, we focus on multi-period facility location problems that allow dynamic capacity adjustment, in particular those with complex cost structures. We study these problems from different operations research perspectives, presenting and comparing several mixed-integer linear programming (MILP) models, evaluating their use in practice, and developing efficient solution algorithms. The thesis is divided into four parts. First, we present the industrial context that motivated our work: a forestry company that needs to locate camps to house forest workers. We present a MILP model that allows the construction of new camps as well as the expansion, relocation, and temporary partial closure of existing camps. The model uses particular capacity constraints together with a multi-level economies-of-scale cost structure. The usefulness of the model is assessed through two case studies. The second part introduces the dynamic facility location problem with generalized modular capacities. The model generalizes several dynamic facility location problems and provides stronger linear relaxation bounds than their specialized formulations. It can solve location problems in which the costs of capacity changes are defined for every pair of capacity levels, as is the case in the industrial problem mentioned above. It is applied to three special cases: capacity expansion and reduction, temporary facility closure, and the combination of both. We prove dominance relationships between our formulation and the existing models for these special cases. Computational experiments on a large set of randomly generated instances with up to 100 facilities and 1000 clients show that our model can obtain optimal solutions faster than the existing specialized formulations. Given the complexity of the previous models on large instances, the third part of the thesis proposes Lagrangian heuristics. Based on subgradient and bundle methods, they find good-quality solutions even for large instances with up to 250 facilities and 1000 clients. We then improve the quality of the obtained solution by solving a restricted MILP model that exploits information collected while solving the Lagrangian dual. Computational results show that the heuristics quickly provide good-quality solutions, even for instances on which generic solvers fail to find feasible solutions. Finally, we adapt the preceding heuristics to solve the industrial problem. Two different relaxations are proposed and compared. Extensions of the previous concepts are presented to ensure reliable solution within a reasonable time.
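For illustration, a minimal multi-period capacitated facility location MILP can be sketched as below, using the open-source PuLP library and purely hypothetical costs, capacities, and demands; the thesis's actual models, with modular capacities and economies-of-scale costs, are considerably richer.

```python
# Minimal multi-period facility location MILP sketch (hypothetical data).
from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

T, I, J = range(3), range(2), range(4)              # periods, sites, clients
open_cost = {(i, t): 100.0 for i in I for t in T}   # operating cost of site i in period t
ship_cost = {(i, j, t): 1.0 for i in I for j in J for t in T}
capacity, demand = 50.0, 10.0

m = LpProblem("multi_period_facility_location", LpMinimize)
y = LpVariable.dicts("open", [(i, t) for i in I for t in T], cat=LpBinary)
x = LpVariable.dicts("flow", [(i, j, t) for i in I for j in J for t in T], lowBound=0)

# Objective: operating costs plus assignment costs over all periods.
m += lpSum(open_cost[i, t] * y[i, t] for i in I for t in T) \
   + lpSum(ship_cost[i, j, t] * x[i, j, t] for i in I for j in J for t in T)

for j in J:
    for t in T:
        m += lpSum(x[i, j, t] for i in I) == demand              # satisfy each client
for i in I:
    for t in T:
        m += lpSum(x[i, j, t] for j in J) <= capacity * y[i, t]  # capacity only if open

m.solve()
print("total cost:", m.objective.value())
```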
Abstract:
In this paper, we study some dynamic generalized information measures between a true distribution and an observed (weighted) distribution, which are useful in life-length studies. Further, some bounds and inequalities related to these measures are also studied.
Abstract:
In this paper, the residual Kullback–Leibler discrimination information measure is extended to conditionally specified models. The extension is used to characterize some bivariate distributions. These distributions are also characterized in terms of proportional hazard rate models and weighted distributions. Moreover, we obtain some bounds for this dynamic discrimination function using the likelihood ratio order and some of the preceding results.
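For reference, the residual Kullback–Leibler discrimination measure used in this line of work is commonly written as follows (a standard form; notation assumed here: densities f and g with survival functions \bar F and \bar G):

```latex
\[
  \mathrm{KL}(X, Y; t)
  = \int_{t}^{\infty} \frac{f(x)}{\bar F(t)}\,
    \log\!\frac{f(x)/\bar F(t)}{g(x)/\bar G(t)}\,\mathrm{d}x .
\]
```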
Characterizations of Bivariate Models Using Some Dynamic Conditional Information Divergence Measures
Abstract:
In this article, we study some relevant information divergence measures, namely the Rényi divergence and Kerridge's inaccuracy measure. These measures are extended to conditionally specified models and are used to characterize some bivariate distributions via the concepts of weighted and proportional hazard rate models. Moreover, some bounds are obtained for these measures using the likelihood ratio order.
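For context, the two measures in their standard (non-dynamic) forms are the Rényi divergence of order α and Kerridge's inaccuracy; the dynamic versions studied in the article restrict these to conditionally specified models:

```latex
\[
  D_{\alpha}(f \,\|\, g) = \frac{1}{\alpha - 1}
    \log \int f(x)^{\alpha}\, g(x)^{1-\alpha}\,\mathrm{d}x ,
  \qquad \alpha > 0,\ \alpha \neq 1,
\]
\[
  K(f, g) = -\int f(x) \log g(x)\,\mathrm{d}x .
\]
```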
Abstract:
Traditionally, we've focused on the question of how to make a system easy to code the first time, or perhaps on how to ease the system's continued evolution. But if we look at life-cycle costs, we must conclude that the important question is how to make a system easy to operate. To do this we need to make it easy for the operators to see what's going on and then to manipulate the system so that it does what it is supposed to do. This is a radically different criterion for success. What makes a computer system visible and controllable? This is a difficult question, but it's clear that today's modern operating systems, with nearly 50 million source lines of code, are neither. Strikingly, the MIT Lisp Machine and its commercial successors provided almost the same functionality as today's mainstream systems, but with only 1 million lines of code. This paper is a retrospective examination of the features of the Lisp Machine hardware and software system. Our key claim is that by building the Object Abstraction into the lowest tiers of the system, great synergy and clarity were obtained. It is our hope that this is a lesson that can impact tomorrow's designs. We also speculate on how the spirit of the Lisp Machine could be extended to include a comprehensive access-control model and how new layers of abstraction could further enrich this model.
Abstract:
The performance of a model-based diagnosis system can be affected by several sources of uncertainty, such as model errors, uncertainty in measurements, and disturbances. This uncertainty can be handled by means of interval models. The aim of this thesis is to propose a methodology for fault detection, isolation, and identification based on interval models. The methodology includes algorithms to obtain, in an automatic way, the symbolic expressions of the residual generators that enhance the structural isolability of the faults, in order to design the fault detection tests. These algorithms are based on the structural model of the system. The stages of fault detection, isolation, and identification are stated as constraint satisfaction problems over continuous domains and solved by means of interval-based consistency techniques. Qualitative fault isolation is enhanced by a reasoning scheme in which the signs of the symptoms are derived from analytical redundancy relations or bond graph models of the system. An initial, empirical analysis of the differences between interval-based and statistical techniques is also presented. The performance and efficiency of the contributions are illustrated through several application examples covering different levels of complexity.
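As a toy illustration of the interval-consistency idea (not the thesis's algorithms), the sketch below flags a fault symptom when a measurement falls outside the output interval predicted by an uncertain static model y = a·u + b with interval parameters a and b; all values are hypothetical.

```python
# Toy interval-consistency fault test for the uncertain model y = a*u + b.

def interval_scale(interval, u):
    """Multiply interval [lo, hi] by the scalar u."""
    lo, hi = interval[0] * u, interval[1] * u
    return (min(lo, hi), max(lo, hi))

def predicted_interval(u, a=(0.9, 1.1), b=(-0.2, 0.2)):
    lo, hi = interval_scale(a, u)
    return lo + b[0], hi + b[1]

def consistent(u, y_measured):
    """True if the measurement lies inside the predicted interval."""
    lo, hi = predicted_interval(u)
    return lo <= y_measured <= hi

print(consistent(2.0, 2.1))   # True  -> behavior consistent with the model
print(consistent(2.0, 3.0))   # False -> residual outside interval: fault symptom
```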
Combining altimetric/gravimetric and ocean model mean dynamic topography models in the GOCINA region
Abstract:
This paper presents a hybrid control strategy integrating dynamic neural networks and feedback linearization into a predictive control scheme. Feedback linearization is an important nonlinear control technique which transforms a nonlinear system into a linear system using nonlinear transformations and a model of the plant. In this work, empirical models based on dynamic neural networks have been employed. Dynamic neural networks are mathematical structures described by differential equations, which can be trained to approximate general nonlinear systems. A case study based on a mixing process is presented.
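A minimal sketch of such a dynamic neural network, here a continuous-time recurrent (Hopfield-type) model dx/dt = -Ax + W tanh(x) + Bu integrated by forward Euler; all matrices below are illustrative placeholders, not trained values.

```python
# Continuous-time recurrent network dx/dt = -A x + W tanh(x) + B u.
import numpy as np

def simulate(x0, u_seq, A, W, B, dt=0.01):
    x, traj = np.asarray(x0, dtype=float), []
    for u in u_seq:
        x = x + dt * (-A @ x + W @ np.tanh(x) + B @ np.atleast_1d(u))
        traj.append(x.copy())
    return np.array(traj)

A = np.diag([1.0, 0.5])            # state decay rates
W = 0.1 * np.ones((2, 2))          # recurrent weights
B = np.array([[1.0], [0.0]])       # input enters the first state only
traj = simulate([0.0, 0.0], [1.0] * 500, A, W, B)   # step-input response
print(traj[-1])                    # approximate steady state after 5 s
```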
Abstract:
Models play a vital role in supporting a range of activities in numerous domains. We rely on models to support the design, visualisation, analysis and representation of parts of the world around us, and as such significant research effort has been invested into numerous areas of modelling, including support for model semantics, dynamic states and behaviour, and temporal data storage and visualisation. Whilst these efforts have increased our capabilities and allowed us to create increasingly powerful software-based models, the process of developing models, supporting tools and/or data structures remains difficult, expensive and error-prone. In this paper we define, from the literature, the key factors in assessing a model's quality and usefulness: semantic richness, support for dynamic states and object behaviour, and temporal data storage and visualisation. We also identify a number of shortcomings in both existing modelling standards and model development processes, and propose a unified generic process to guide users through the development of semantically rich, dynamic and temporal models.
Abstract:
Climate controls fire regimes through its influence on the amount and types of fuel present and on their dryness. CO2 concentration constrains primary production by limiting photosynthetic activity in plants. However, although fuel accumulation depends on biomass production, and hence on CO2 concentration, the quantitative relationship between atmospheric CO2 concentration and biomass burning is not well understood. Here a fire-enabled dynamic global vegetation model (the Land surface Processes and eXchanges model, LPX) is used to attribute glacial–interglacial changes in biomass burning to the increase in CO2, which would be expected to increase primary production and therefore fuel loads even in the absence of climate change, versus the effects of climate change. Four general circulation models provided last glacial maximum (LGM) climate anomalies, that is, differences from the pre-industrial (PI) control climate, from the Palaeoclimate Modelling Intercomparison Project Phase 2, allowing the construction of four scenarios for LGM climate. Modelled carbon fluxes from biomass burning were corrected for the model's observed prediction biases in contemporary regional average values for biomes. With LGM climate and low CO2 (185 ppm) effects included, the modelled global flux at the LGM was in the range 1.0–1.4 Pg C year⁻¹, about a third less than that modelled for PI time. LGM climate with pre-industrial CO2 (280 ppm) yielded unrealistic results, with global biomass burning fluxes similar to or even greater than under the pre-industrial climate. It is inferred that a substantial part of the increase in biomass burning after the LGM must be attributed to the effect of increasing CO2 concentration on primary production and fuel load. Today, by analogy, both rising CO2 and global warming must be considered risk factors for increasing biomass burning. Both effects need to be included in models to project future fire risks.
Abstract:
This thesis is composed of three articles on the subjects of macroeconomics and finance. Each article corresponds to a chapter and is written in paper format. In the first article, co-authored with Axel Simonsen, we model and estimate a small open economy for Canada in a two-country Dynamic Stochastic General Equilibrium (DSGE) framework. We show that it is important to account for the correlation between domestic and foreign shocks and for incomplete pass-through. In the second chapter-paper, co-authored with Hedibert Freitas Lopes, we estimate a regime-switching macro-finance model of the term structure of interest rates to study the post-World War II (WWII) joint behavior of US macro-variables and the yield curve. We show that our model tracks the US NBER cycles well, that the addition of regime changes is important for explaining the Expectation Theory of the term structure, and that macro-variables have increasing importance in recessions for explaining the variability of the yield curve. We also present a novel sequential Monte Carlo algorithm to learn about the parameters and the latent states of the economy. In the third chapter, I present a Gaussian Affine Term Structure Model (ATSM) with latent jumps in order to address two questions: (1) what are the implications of incorporating jumps into an ATSM for Asian option pricing, in the particular case of the Brazilian DI Index (IDI) option; and (2) how do jumps and options affect the bond risk-premia dynamics. I show that the jump risk-premium is negative in a scenario of decreasing interest rates (my sample period) and is important for explaining the level of yields, and that Gaussian models without jumps and with constant-intensity jumps are good at pricing Asian options.
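The abstract does not spell out its sequential Monte Carlo scheme; for orientation, a generic bootstrap particle filter for a latent AR(1) state with Gaussian observations looks like the sketch below (all parameters and data are hypothetical, not the thesis's algorithm).

```python
# Generic bootstrap particle filter sketch for
#   x_t = phi * x_{t-1} + N(0, q^2),   y_t = x_t + N(0, r^2).
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(y, n=1000, phi=0.9, q=0.1, r=0.2):
    x = rng.normal(0.0, 1.0, n)                       # initial particle cloud
    loglik = 0.0
    for yt in y:
        x = phi * x + rng.normal(0.0, q, n)           # propagate particles
        w = np.exp(-0.5 * ((yt - x) / r) ** 2) / (r * np.sqrt(2 * np.pi))
        loglik += np.log(w.mean())                    # marginal likelihood update
        x = rng.choice(x, size=n, p=w / w.sum())      # multinomial resampling
    return loglik

y_fake = rng.normal(0.0, 0.5, 50)                     # smoke-test data
print(particle_filter(y_fake))
```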
Abstract:
This work evaluated kinetic and adsorption physicochemical models for the biosorption of lanthanum, neodymium, europium, and gadolinium by Sargassum sp. in batch systems. The results showed: (a) the pseudo-second-order kinetic model gave the best fit to the experimental data, with the initial metal adsorption velocity parameter in the range 0.042–0.055 mmol g⁻¹ min⁻¹ (La < Nd < Gd < Eu); (b) the Langmuir adsorption model presented adequate correlation, with maximum metal uptake at 0.60–0.70 mmol g⁻¹ (Eu < La < Gd < Nd), while the metal–biomass affinity parameter showed distinct values (Gd < Nd < Eu < La: 183.1, 192.5, 678.3, and 837.3 L g⁻¹, respectively); and (c) preliminarily, the kinetic and adsorption evaluations did not reveal a well-defined metal selectivity behavior for rare earth (RE) biosorption on Sargassum sp., but they indicate a possible partition among the REs studied.
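For reference, the two models named in the abstract have the standard forms below, with q the metal uptake at time t, q_e the equilibrium uptake, h the initial sorption velocity, C the equilibrium concentration, and b the affinity parameter:

```latex
\[
  \frac{dq}{dt} = k_2\,(q_e - q)^2 ,
  \qquad h = k_2 q_e^2 \quad \text{(pseudo-second-order kinetics)},
\]
\[
  q = \frac{q_{\max}\, b\, C}{1 + b\, C} \quad \text{(Langmuir isotherm)} .
\]
```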
Abstract:
The objective of this work is to develop a non-stoichiometric equilibrium model to study parameter effects in the gasification of a feedstock in downdraft gasifiers. The non-stoichiometric equilibrium model is also known as the Gibbs free energy minimization method. Four models were developed and tested. First, a pure non-stoichiometric equilibrium model, called M1, was developed; then the methane content was constrained by correlating experimental data, generating model M2. A kinetic constraint that determines the apparent gasification rate was added for model M3, and finally the two aforementioned constraints were implemented together in model M4. Models M2 and M4 proved to be the most accurate of the four, each with a mean RMS (root mean square) error of 1.25. The gasification of Brazilian Pinus elliottii in a downdraft gasifier with air as the gasification agent was also studied. The input parameters considered were: (a) equivalence ratio (0.28–0.35); (b) moisture content (5–20%); (c) gasification time (30–120 min); and (d) carbon conversion efficiency (80–100%).
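For context, the non-stoichiometric equilibrium problem has the standard form below (assuming ideal-gas behavior): choose species amounts n_i ≥ 0 that minimize the total Gibbs free energy subject to elemental balances, where a_ij counts atoms of element j in species i. The paper's constrained variants M2-M4 add methane and kinetic constraints on top of this.

```latex
\[
  \min_{n_i \ge 0}\; G = \sum_i n_i \left[ \Delta G^{\circ}_{f,i}
    + RT \ln\!\left( \frac{n_i}{\sum_k n_k} \cdot \frac{P}{P^{\circ}} \right) \right]
  \quad \text{s.t.} \quad \sum_i a_{ij}\, n_i = b_j \;\; \forall j .
\]
```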