963 results for Dynamic Models
Abstract:
The spatial and temporal dynamics in the stream water NO3-N concentrations in a major European river system, the Garonne (62,700 km²), are described and related to variations in climate, land management, and effluent point sources using multivariate statistics. Building on this, the Hydrologiska Byrans Vattenbalansavdelning (HBV) rainfall-runoff model and the Integrated Catchment Model of Nitrogen (INCA-N) are applied to simulate the observed flow and N dynamics. This is done to help us understand which factors and processes control the flow and N dynamics in different climate zones and to assess the relative inputs from diffuse and point sources across the catchment. This is the first application of the linked HBV and INCA-N models to a major European river system commensurate with the largest basins to be managed under the Water Framework Directive. The simulations suggest that in the lowlands, seasonal patterns in the stream water NO3-N concentrations emerge and are dominated by diffuse agricultural inputs, with an estimated 75% of the river load in the lowlands derived from arable farming. The results confirm earlier European catchment studies: namely, that current semi-distributed catchment-scale dynamic models, which integrate variations in land cover, climate, and a simple representation of the terrestrial and in-stream N cycle, are able to simulate seasonal NO3-N patterns at large spatial (> 300 km²) and temporal (≥ monthly) scales using available national datasets.
Abstract:
In the past decade, a number of mechanistic, dynamic simulation models of several components of the dairy production system have become available. However, their use has been limited by the detailed technical knowledge and special software required to run them and by the lack of compatibility between models in predicting various metabolic processes in the animal. The first objective of the current study was to integrate the dynamic models of [Brit. J. Nutr. 72 (1994) 679] on rumen function, [J. Anim. Sci. 79 (2001) 1584] on methane production, [J. Anim. Sci. 80 (2002) 2481] on N partition, and a new model of P partition. The second objective was to construct a decision support system to analyse nutrient partition between animal and environment. The integrated model combines key environmental pollutants such as N, P and methane within a nutrient-based feed evaluation system. The model was run under different scenarios and the sensitivity of various parameters analysed. A comparison of predictions from the integrated model with the original simulation models showed an improvement in N excretion since the integrated model uses the dynamic model of [Brit. J. Nutr. 72 (1994) 679] to predict microbial N, which was not represented in detail in the original model. The integrated model can be used to investigate the degree to which production and environmental objectives are antagonistic, and it may help to explain and understand the complex mechanisms involved at the ruminal and metabolic levels. Among the integrated model outputs are the forms of N and P in excreta and methane, which can be used as indices of environmental pollution. (C) 2004 Elsevier B.V. All rights reserved.
Abstract:
Many communication signal processing applications involve modelling and inverting complex-valued (CV) Hammerstein systems. We develop a new CV B-spline neural network approach for efficient identification of the CV Hammerstein system and effective inversion of the estimated CV Hammerstein model. Specifically, the CV nonlinear static function in the Hammerstein system is represented using the tensor product of two univariate B-spline neural networks. An efficient alternating least squares estimation method is adopted for identifying the CV linear dynamic model's coefficients and the CV B-spline neural network's weights; it yields closed-form solutions for both sets of parameters, and the estimation process is guaranteed to converge very fast to a unique minimum solution. Furthermore, an accurate inversion of the CV Hammerstein system can readily be obtained using the estimated model. In particular, the inverse of the CV nonlinear static function in the Hammerstein system can be calculated effectively using a Gauss-Newton algorithm, which naturally incorporates the efficient De Boor algorithm with both the B-spline curve and first-order derivative recursions. The effectiveness of our approach is demonstrated in an application to the equalisation of Hammerstein channels.
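For intuition, the following is a minimal, real-valued sketch of the alternating least-squares idea described above, with a simple polynomial basis standing in for the paper's complex-valued tensor-product B-spline network; the function names, basis, and settings are illustrative assumptions, not the authors' implementation.

```python
# Simplified, real-valued Hammerstein identification by alternating least squares:
# static nonlinearity f(u) = sum_m w_m * phi_m(u) followed by an FIR block h.
import numpy as np

def basis(u, order=3):
    """Polynomial basis phi_m(u) = u^m, m = 1..order (stand-in for B-splines)."""
    return np.column_stack([u**m for m in range(1, order + 1)])

def hammerstein_als(u, y, n_fir=5, order=3, n_iter=20):
    Phi = basis(u, order)                      # N x M basis evaluations
    w = np.ones(Phi.shape[1])                  # nonlinearity weights (initial guess)
    N = len(u)
    for _ in range(n_iter):
        # Step 1: fix w, regress y on the lagged intermediate signal x_k = Phi @ w
        x = Phi @ w
        X = np.column_stack([np.r_[np.zeros(j), x[:N - j]] for j in range(n_fir)])
        h, *_ = np.linalg.lstsq(X, y, rcond=None)
        # Step 2: fix h, regress y on the h-filtered basis functions
        G = np.zeros((N, Phi.shape[1]))
        for j in range(n_fir):
            G[j:, :] += h[j] * Phi[:N - j, :]
        w, *_ = np.linalg.lstsq(G, y, rcond=None)
        # Resolve the scale ambiguity between the two blocks
        s = np.linalg.norm(h)
        h, w = h / s, w * s
    return w, h

# Tiny usage example on synthetic data
rng = np.random.default_rng(0)
u = rng.uniform(-1, 1, 2000)
x_true = u + 0.5 * u**2 - 0.3 * u**3
h_true = np.array([1.0, 0.6, 0.2])
y = np.convolve(x_true, h_true)[:len(u)] + 0.01 * rng.standard_normal(len(u))
w_hat, h_hat = hammerstein_als(u, y, n_fir=3, order=3)
```

Each half-step is an ordinary linear least-squares problem with a closed-form solution, which is what makes the alternating scheme attractive.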
Abstract:
Although there has been substantial research on long-run co-movement (common trends) in the empirical macroeconomics literature, little or no work has been done on short-run co-movement (common cycles). Investigating common cycles is important on two grounds: first, their existence is an implication of most dynamic macroeconomic models; second, they impose important restrictions on dynamic systems, which can be used for efficient estimation and forecasting. In this paper, using a methodology that takes into account short- and long-run co-movement restrictions, we investigate their existence in a multivariate data set containing U.S. per-capita output, consumption, and investment. As predicted by theory, the data have common trends and common cycles. Based on the results of a post-sample forecasting comparison between restricted and unrestricted systems, we show that a non-trivial loss of efficiency results when common cycles are ignored. If permanent shocks are associated with changes in productivity, the latter fails to be an important source of variation for output and investment, contradicting simple aggregate dynamic models. Nevertheless, these shocks play a very important role in explaining the variation of consumption, showing evidence of smoothing. Furthermore, it seems that permanent shocks to output play a much more important role in explaining unemployment fluctuations than previously thought.
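In Stock-Watson / Vahid-Engle style notation (the symbols are mine, not the paper's), the restrictions being exploited can be sketched as:

```latex
\[
y_t = \underbrace{C(1)\sum_{s\le t}\varepsilon_s}_{\text{common trends}}
      + \underbrace{c_t}_{\text{cycles}},
\qquad
\alpha' C(1) = 0 \;\;(\text{cointegration}),
\qquad
\tilde{\alpha}' c_t = 0 \;\Longleftrightarrow\; \tilde{\alpha}'\Delta y_t = \tilde{\alpha}'\varepsilon_t .
\]
```

A cofeature vector α̃ thus annihilates all serial correlation in Δy_t; this is the testable short-run restriction imposed alongside cointegration in the restricted system used for estimation and forecasting.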
Abstract:
This paper discusses the economic rationale for developing a system of social targets as a way for the federal government to increase the efficiency of the social resources transferred to municipalities. It develops several extensions of the principal-agent model, including static approaches with and without imperfect information and dynamic approaches with complete and incomplete contracts. The results of the static models indicate that the usual targeting criteria, under which poorer localities receive more resources, can create adverse incentives for poverty eradication. We also show that unconditional transfers from the federal government crowd out local social spending. The paper argues in favor of contracts under which the greater the improvement in the chosen social indicator, the more resources the municipality receives. Introducing imperfect information into this model essentially penalizes the poor segments of areas whose governments prove to be less poverty-averse. The paper also addresses the problem of political favoritism, in which certain social groups receive more, or less, attention from local governments; as a result, social policies end up privileging certain sectors at the expense of others. By establishing social targets it is possible, if not to eliminate the problem, at least to create the right incentives for social spending to be distributed more equitably. We also develop dynamic models with different possibilities of renegotiation over time. We show that the best way to increase the allocative efficiency of the funds would be to create institutional mechanisms that rule out bilateral renegotiation. This optimal contract reproduces the multi-period sequence of targets and transfers found in the solution of the static model. However, this result disappears when incomplete contracts are incorporated. In that case, the ex-ante inefficiencies created by the possibility of renegotiation must be weighed against the ex-post inefficiencies created by not using the new information revealed along the way. Finally, we introduce the possibility that the observed social outcome depends not only on the investment made but also on the presence of shocks. In that case, both the federal government and the municipality raise their social investment targets. Under linear contracts, negative shocks cause municipalities to receive fewer resources precisely in adverse situations. To get around this problem, we show the importance of using contracts based on performance comparison.
Abstract:
It is well known that cointegration between the levels of two variables (labeled Yt and yt in this paper) is a necessary condition to assess the empirical validity of a present-value model (PV and PVM, respectively, hereafter) linking them. The work on cointegration has been so prevalent that it is often overlooked that another necessary condition for the PVM to hold is that the forecast error entailed by the model is orthogonal to the past. The basis of this result is the use of rational expectations in forecasting future values of variables in the PVM. If this condition fails, the present-value equation will not be valid, since it will contain an additional term capturing the (non-zero) conditional expected value of future error terms. Our article has a few novel contributions, but two stand out. First, in testing for PVMs, we advise splitting the restrictions implied by PV relationships into orthogonality conditions (or reduced-rank restrictions) before running additional tests on the values of the parameters. We show that PV relationships entail a weak-form common feature relationship as in Hecq, Palm, and Urbain (2006) and in Athanasopoulos, Guillén, Issler, and Vahid (2011), and also a polynomial serial-correlation common feature relationship as in Cubadda and Hecq (2001); these represent restrictions on dynamic models that allow several tests for the existence of PV relationships to be used. Because these relationships occur mostly with financial data, we propose tests based on generalized method of moments (GMM) estimates, where it is straightforward to propose robust tests in the presence of heteroskedasticity. We also propose a robust Wald test developed to investigate the presence of reduced-rank models. Their performance is evaluated in a Monte Carlo exercise. Second, in the context of asset pricing, we propose applying a permanent-transitory (PT) decomposition based on Beveridge and Nelson (1981), which focuses on extracting the long-run component of asset prices, a key concept in modern financial theory as discussed in Alvarez and Jermann (2005), Hansen and Scheinkman (2009), and Nieuwerburgh, Lustig, and Verdelhan (2010). Here again we can exploit the results developed in the common cycle literature to easily extract permanent and transitory components under both long- and short-run restrictions. The techniques discussed herein are applied to long-span annual data on long- and short-term interest rates and on prices and dividends for the U.S. economy. In both applications we do not reject the existence of a common cyclical feature vector linking these two series. Extracting the long-run component shows the usefulness of our approach and highlights the presence of asset-pricing bubbles.
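As a hedged sketch in generic notation (θ and δ are illustrative parameters, not the paper's estimates), the present-value relation and the orthogonality condition it implies can be written as:

```latex
\[
Y_t = \theta(1-\delta)\sum_{i=0}^{\infty}\delta^{i}\,E_t\,y_{t+i}
\;\Longrightarrow\;
Y_t = \theta(1-\delta)\,y_t + \delta\,E_t Y_{t+1},
\qquad
u_{t+1} \equiv Y_{t+1} - \tfrac{1}{\delta}\bigl(Y_t - \theta(1-\delta)\,y_t\bigr),
\quad E_t\,u_{t+1} = 0 .
\]
```

The GMM moment conditions E[u_{t+1} z_t] = 0, for instruments z_t dated t or earlier, then deliver the orthogonality (reduced-rank/common-feature) tests before any restriction on θ and δ is imposed.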
Abstract:
This paper investigates how the purchasing patterns of consumers of storable goods are affected by their price expectations. Using a standard dynamic utility-maximization model, an analytical expression is derived for consumer purchases as a function of their expectations about future prices. A more tractable version of the model is then built to illustrate graphically how different types of price expectations imply different consumer purchasing patterns. In the empirical application, I investigate which price-expectation model, among those commonly used in the literature, is consistent with the data. Finally, considerable heterogeneity is found in consumers' price expectations: small households believe that prices follow a first-order Markov process, whereas high-income households are rational.
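A stylized version of the kind of dynamic purchase problem described above (the notation and functional forms are mine, not the paper's) is the household's Bellman equation:

```latex
\[
V(i_t, p_t) = \max_{q_t \ge 0,\; 0 \le c_t \le i_t + q_t}\;
u(c_t) - p_t\, q_t + \beta\, E\!\left[\, V(i_{t+1}, p_{t+1}) \mid p_t \,\right],
\qquad
i_{t+1} = i_t + q_t - c_t ,
\]
```

where i is the inventory of the storable good, q purchases, and c consumption; the conditional expectation encodes the assumed price-expectation process (for instance, a first-order Markov transition for p), so purchases respond to expected future prices through the continuation value.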
Abstract:
In this dissertation, different ways of combining neural predictive models or neural-based forecasts are discussed. The proposed approaches mostly consider Gaussian radial basis function networks, which can be efficiently identified and estimated through recursive/adaptive methods. Two different ways of combining are explored to obtain a final estimate, model mixing and model synthesis, with the aim of improving both efficiency and effectiveness. In the context of model mixing, the usual framework for linearly combining estimates from different models is extended to deal with the case where the forecast errors from those models are correlated. In the context of model synthesis, and to address the problems raised by heavily nonstationary time series, we propose hybrid dynamic models for more advanced time series forecasting, composed of a dynamic trend regressive model (or even a dynamic harmonic regressive model) and a Gaussian radial basis function network. Additionally, using the model mixing procedure, two approaches for decision-making from forecasting models are discussed and compared: either inferring decisions from combined predictive estimates, or combining prescriptive solutions derived from different forecasting models. Finally, the application of some of the models and methods proposed previously is illustrated with two case studies, based on time series from finance and from tourism.
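For the model-mixing extension to correlated forecast errors, a standard benchmark (a sketch of the general idea, not necessarily the dissertation's exact formulation) is the variance-minimizing linear combination:

```latex
\[
\hat{y}_t = w^{\ast\prime} f_t,
\qquad
w^{\ast} = \frac{\Sigma^{-1}\iota}{\iota'\Sigma^{-1}\iota},
\qquad \iota = (1,\dots,1)' ,
\]
```

where f_t stacks the individual forecasts and Σ is the (non-diagonal) covariance matrix of their errors; with a diagonal Σ this collapses to the usual inverse-MSE weights, so the error correlation structure is precisely what the extended framework must account for.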
Abstract:
In recent years, many researchers in the field of biomedical sciences have made successful use of mathematical models to study, in a quantitative way, a multitude of phenomena such as those found in disease dynamics, control of physiological systems, optimization of drug therapy, the economics of preventive medicine and many other applications. The availability of good dynamic models has provided means for the simulation and design of novel control strategies in the context of biological events. This work concerns a particular model related to HIV infection dynamics which is used to allow a comparative evaluation of schemes for the treatment of AIDS patients. The mathematical model adopted in this work was proposed by Nowak & Bangham (1996) and describes the dynamics of the viral concentration in terms of its interaction with CD4 cells and the cytotoxic T lymphocytes, which are responsible for the defense of the organism. Two conceptually distinct techniques for drug therapy are analyzed: Open Loop Treatment, where an a priori fixed dosage is prescribed, and Closed Loop Treatment, where the doses are adjusted according to results obtained by laboratory analysis. Simulation results show that the Closed Loop scheme can achieve improved quality of treatment in terms of reduction of the viral load and of the quantity of administered drugs, but with the inconvenience of requiring frequent, periodic laboratory analyses.
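The Nowak & Bangham (1996) model is commonly written as four coupled ODEs for uninfected cells, infected cells, free virus, and CTLs; the sketch below integrates that standard form with SciPy, using purely illustrative parameter values and initial conditions (not those of this work).

```python
# Standard Nowak-Bangham host-virus-CTL dynamics integrated with SciPy (illustrative values).
import numpy as np
from scipy.integrate import solve_ivp

def nowak_bangham(t, state, lam, d, beta, a, p, k, u, c, b):
    x, y, v, z = state          # uninfected cells, infected cells, free virus, CTLs
    dx = lam - d * x - beta * x * v
    dy = beta * x * v - a * y - p * y * z
    dv = k * y - u * v
    dz = c * y * z - b * z
    return [dx, dy, dv, dz]

params = dict(lam=1.0, d=0.1, beta=0.5, a=0.2, p=1.0, k=10.0, u=5.0, c=0.1, b=0.01)
sol = solve_ivp(nowak_bangham, (0.0, 200.0), [10.0, 0.0, 0.01, 0.1],
                args=tuple(params.values()), t_eval=np.linspace(0.0, 200.0, 1000))
x, y, v, z = sol.y              # trajectories; v is the viral load a therapy scheme targets
```

An open-loop scheme would fix a drug effect (for example, scaling beta or k) in advance, while a closed-loop scheme would recompute the dose from sampled values of x, v, z at each laboratory analysis.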
Abstract:
This work analyses a real-time orbit estimator using the raw navigation solution provided by GPS receivers. The estimation algorithm considers a Kalman filter with a rather simple orbit dynamic model and random-walk modeling of the receiver clock bias and drift. Using the Topex/Poseidon satellite as a test bed, the characteristics of model truncation, sampling rates and degradation of the GPS receiver (Selective Availability) were analysed. Copyright © 2007 by ABCM.
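As a hedged illustration of the random-walk clock modeling mentioned above (the orbit states and the paper's actual model are omitted, and all matrices here are illustrative assumptions), a discrete Kalman filter for the receiver clock bias and drift might look like:

```python
# Discrete Kalman filter for [clock bias, clock drift] with random-walk process noise.
import numpy as np

dt = 1.0                                    # filter step [s]
F = np.array([[1.0, dt], [0.0, 1.0]])       # bias integrates the drift; drift is a random walk
Q = np.diag([1e-3, 1e-5])                   # process noise (illustrative)
H = np.array([[1.0, 0.0]])                  # the navigation solution observes the bias only
R = np.array([[1.0]])                       # measurement noise (illustrative)

x = np.zeros(2)                             # state: [bias, drift]
P = np.eye(2) * 10.0

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the clock bias taken from the raw GPS navigation solution
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in np.array([[0.1], [0.12], [0.15]]):  # toy bias measurements
    x, P = kf_step(x, P, z)
```

In the full estimator the state vector would also carry position and velocity propagated by the simplified orbit dynamic model, with the raw navigation solution supplying the measurements.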
Abstract:
The cost of maintenance makes up a large part of total energy costs in ruminants. The metabolizable energy (ME) requirement for maintenance (MEm) is the daily ME intake that exactly balances heat energy (HE). The net energy requirement for maintenance (NEm) is estimated by subtracting from MEm the HE produced by the processing of the diet. MEm cannot be directly measured experimentally and is estimated by measuring basal metabolism in fasted animals or by regression on the recovered energy measured in fed animals. MEm and NEm usually, but not always, are expressed in terms of BW^0.75. However, this scaling factor is substantially empirical and its exponent is often inadequate, especially for growing animals. MEm estimates from different feeding systems (AFRC, CNCPS, CSIRO, INRA, NRC) were compared using dairy cattle data. The comparison showed that these systems differ in the approaches used to estimate MEm and in its quantification. The CSIRO system estimated the highest MEm, mostly because it includes a correction factor that increases ME as the feeding level increases. Relative to the CSIRO estimates, those of NRC, INRA, CNCPS, and AFRC were on average 0.92, 0.86, 0.84, and 0.78, respectively. MEm is affected by the previous nutritional history of the animals. This phenomenon is best predicted by dynamic models, of which several have been published in recent decades. They are based either on energy flows or on nutrient flows. Some of the different approaches used are described and discussed.
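In compact form, the definitions used above can be sketched as (generic notation; the coefficient a is system-specific and illustrative):

```latex
\[
NE_m = ME_m - HI_m,
\qquad
ME_m = a \cdot BW^{0.75},
\]
```

where HI_m is the heat increment of processing the diet at maintenance and BW^0.75 is metabolic body weight; the cross-system comparison in the abstract amounts, roughly, to different effective values of a (plus the feeding-level correction used by CSIRO).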
Abstract:
In this paper, a trajectory tracking control problem for a nonholonomic mobile robot is addressed through the integration of a kinematic neural controller (KNC) and a torque neural controller (TNC), where both the kinematic and dynamic models contain disturbances. The KNC is a variable structure controller (VSC) based on sliding mode control theory (SMC) and is applied to compensate for the kinematic disturbances. The TNC is an inertia-based controller composed of a dynamic neural controller (DNC) and a robust neural compensator (RNC), applied to compensate for the mobile robot dynamics and bounded unknown disturbances. A stability analysis based on the Lyapunov method and simulation results are provided to show the effectiveness of the proposed approach. © 2012 Springer-Verlag.
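For reference, the nonholonomic (unicycle-type) kinematic model and the tracking error on which such kinematic controllers act can be sketched as (notation mine, not the paper's):

```latex
\[
\dot{x} = v\cos\theta, \quad \dot{y} = v\sin\theta, \quad \dot{\theta} = \omega,
\qquad
\begin{pmatrix} e_x \\ e_y \\ e_\theta \end{pmatrix}
=
\begin{pmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x_r - x \\ y_r - y \\ \theta_r - \theta \end{pmatrix},
\]
```

where (x_r, y_r, θ_r) is the reference trajectory; the KNC generates velocity commands (v, ω) that drive the error to zero despite kinematic disturbances, and the TNC produces the torques that make the actual velocities follow those commands despite the uncertain dynamics.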
Abstract:
The relevant literature points to many factors behind the deforestation of the Brazilian Legal Amazon, ranging from endogenous aspects such as soil and climate conditions to aspects related to human action, such as population movements, urban growth and, in particular, the autonomous or induced actions of the various public and private economic agents operating in the region, which have historically shaped the processes of land occupation and economic exploitation of the Amazonian space. This article aims to perform a causality test, in the Granger sense, on the main variables suggested as important for explaining deforestation in the Legal Amazon over the period from 1997 to 2006. The methodology is based on dynamic panel-data models developed by Holtz-Eakin et al. (1988) and Arellano and Bond (1991), who devised a causality test building on the seminal article by Granger (1969). Among the main results is the empirical finding of bidirectional causality between deforestation and the areas of permanent and temporary crops, as well as the size of the cattle herd.
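In the spirit of Holtz-Eakin et al. (1988), the panel Granger-causality test takes the following form (a generic sketch, not the article's exact equation):

```latex
\[
d_{it} = \sum_{j=1}^{m} \alpha_j\, d_{i,t-j} + \sum_{j=1}^{m} \beta_j\, x_{i,t-j} + f_i + \varepsilon_{it},
\qquad
H_0\colon \beta_1 = \dots = \beta_m = 0 ,
\]
```

where d_{it} is deforestation in unit i and year t, x_{i,t-j} is a lagged candidate cause (permanent or temporary crop area, cattle herd size), and f_i is a fixed effect; first-differencing removes f_i and the Arellano-Bond GMM estimator uses lagged levels as instruments, with rejection of H_0 in both directions (swapping the roles of d and x) indicating the bidirectional causality reported.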
Abstract:
This dissertation presents the problem of order reduction of linear dynamic models from the viewpoint of optimization via genetic algorithms. A cost function, obtained from the norm of the numerator coefficients of the error transfer function between the original and the reduced model, is minimized by a genetic algorithm, and the parameters of the reduced model are computed as a result. The procedure is applied to several examples that demonstrate the validity of the approach.
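A minimal sketch of the cost function described above, using SciPy's differential evolution as an evolutionary stand-in for the dissertation's genetic algorithm; the fourth-order G(s), the second-order reduced structure, and the parameter bounds are illustrative assumptions.

```python
# Cost = norm of the numerator coefficients of the error transfer function E(s) = G(s) - Gr(s).
import numpy as np
from scipy.optimize import differential_evolution

num_G = np.array([10.0, 40.0])                       # original model G(s) (illustrative)
den_G = np.array([1.0, 6.0, 11.0, 6.0, 1.0])

def cost(theta):
    b1, b0, a1, a0 = theta                           # reduced model Gr(s) = (b1 s + b0)/(s^2 + a1 s + a0)
    num_R, den_R = np.array([b1, b0]), np.array([1.0, a1, a0])
    # Numerator of E(s) = G - Gr over a common denominator: num_G*den_R - num_R*den_G
    err_num = np.polysub(np.polymul(num_G, den_R), np.polymul(num_R, den_G))
    return np.linalg.norm(err_num)

bounds = [(-50, 50), (-50, 50), (0.01, 20), (0.01, 20)]
result = differential_evolution(cost, bounds, seed=1)
b1, b0, a1, a0 = result.x                            # parameters of the reduced-order model
```

A population-based genetic algorithm would play the same role as the differential-evolution call here: evaluate this cost over candidate parameter vectors and evolve them toward the minimum.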
Abstract:
This dissertation presents a genetic algorithm (GA) based methodology for determining equivalent dynamic models of wind farms with squirrel-cage induction generators (SCIG) and doubly fed induction generators (DFIG) having distinct electrical and mechanical parameters. The technique relies on a multi-objective formulation, solved by a GA, that minimizes the squared errors of active and reactive power between a single-equivalent-generator model and the model of the wind farm under study. The influence of the equivalent wind-farm model on the dynamic behavior of synchronous generators is also investigated with the proposed method. The approach is tested on a 10 MW wind farm composed of four wind turbines (2x2 MW and 2x3 MW), consisting alternately of SCIG and DFIG units connected to an infinite bus and subsequently to the IEEE 14-bus network. The results obtained with the detailed dynamic model of the wind farm are compared with those of the proposed equivalent model to assess the accuracy and computational cost of the proposed model.
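The multi-objective fitness solved by the GA can be sketched as follows (the weights and symbols are illustrative, not the dissertation's exact formulation):

```latex
\[
J(\boldsymbol{\theta}) = \sum_{k}\Big[
w_P\,\big(P_{\mathrm{det}}(t_k) - P_{\mathrm{eq}}(t_k,\boldsymbol{\theta})\big)^2
+ w_Q\,\big(Q_{\mathrm{det}}(t_k) - Q_{\mathrm{eq}}(t_k,\boldsymbol{\theta})\big)^2
\Big],
\]
```

where θ collects the electrical and mechanical parameters of the single equivalent generator, P_det and Q_det are the active and reactive power of the detailed wind-farm model, P_eq and Q_eq those of the equivalent, and t_k are the sampling instants of the disturbance response that the GA tries to match.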