946 results for dynamic stochastic general equilibrium models
Abstract:
Human activities are altering greenhouse gas concentrations in the atmosphere and causing global climate change. The impacts of human-induced climate change have become increasingly important in recent years. The objective of this work was to develop a database of climate information for future scenarios using Geographic Information System (GIS) tools. Future scenarios centred on the 2020s, 2050s, and 2080s (scenarios A2 and B2) were obtained from the General Circulation Models (GCMs) available from the Data Distribution Centre of the Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report (TAR). The TAR comprises six GCMs with different spatial resolutions (ECHAM4: 2.8125° × 2.8125°, HadCM3: 3.75° × 2.5°, CGCM2: 3.75° × 3.75°, CSIROMk2b: 5.625° × 3.214°, and CCSR/NIES: 5.625° × 5.625°). The monthly means of the climate variables were obtained by averaging the available models using GIS spatial analysis tools (arithmetic operations). Maps of monthly mean temperature, minimum temperature, maximum temperature, rainfall, relative humidity, and solar radiation were produced at a spatial resolution of 0.5° × 0.5° latitude and longitude. This GIS-based mapping method allowed the spatial distribution of the future climate scenarios to be evaluated. The database is currently being used in Embrapa projects on the impacts of climate change on plant diseases.
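For readers who want to reproduce the ensemble-averaging step outside a GIS, a minimal sketch of the same arithmetic operation in Python is shown below; the array shapes and values are placeholders, not the actual TAR model fields or the Embrapa database.

```python
import numpy as np

# Hypothetical monthly-mean fields from several GCMs, assumed already regridded
# to the common 0.5° x 0.5° grid (360 x 720 cells) used for the maps.
n_lat, n_lon = 360, 720
gcm_fields = [15.0 + np.random.rand(n_lat, n_lon) * 10.0 for _ in range(5)]  # placeholder data

# Multi-model mean: a cell-by-cell arithmetic average, mirroring the GIS
# spatial-analysis "arithmetic operation" described in the abstract.
ensemble_mean = np.mean(np.stack(gcm_fields), axis=0)
print(ensemble_mean.shape)  # (360, 720)
```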
Abstract:
Supplementary material is available at: http://journal.frontiersin.org/article/10.3389/fpsyg.2016.01509
Abstract:
Master's dissertation in Economics and Business Sciences, Unidade de Ciências Económicas e Empresariais, Universidade do Algarve, 1996
Abstract:
This paper shows that the proposed Rician shadowed model for multi-antenna communications allows for the unification of a wide set of models, both for multiple-input multiple-output (MIMO) and single-input single-output (SISO) communications. The MIMO Rayleigh and MIMO Rician models can be deduced from the MIMO Rician shadowed model, and so can their SISO counterparts. Other more general SISO models, besides the Rician shadowed, are also included, such as the κ-μ model and its recent generalization, the κ-μ shadowed model. Moreover, the SISO η-μ and Nakagami-q models are also included in the MIMO Rician shadowed model. The literature already presents the probability density function (pdf) of the Rician shadowed Gram channel matrix in terms of the well-known gamma-Wishart distribution. We here derive its moment generating function in a tractable form. Closed-form expressions for the cumulative distribution function and the pdf of the maximum eigenvalue are also derived.
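A minimal SISO Monte Carlo sketch of Rician shadowed fading, assuming the usual construction in which the line-of-sight (LOS) amplitude fluctuates with a Nakagami-m (gamma-power) law on top of complex Gaussian scattering; parameter values are illustrative and this does not reproduce the paper's closed-form MIMO results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
m, omega_los, b0 = 2.0, 1.0, 0.25   # shadowing severity, mean LOS power, scatter variance per dimension

# LOS amplitude: Nakagami-m, i.e. the squared amplitude is Gamma(m, omega_los/m)
A = np.sqrt(rng.gamma(shape=m, scale=omega_los / m, size=n))
phase = rng.uniform(0.0, 2.0 * np.pi, n)

# Diffuse (scattered) component: circularly symmetric complex Gaussian
scatter = rng.normal(0.0, np.sqrt(b0), n) + 1j * rng.normal(0.0, np.sqrt(b0), n)

h = A * np.exp(1j * phase) + scatter     # complex channel gain
envelope = np.abs(h)                     # Rician shadowed envelope samples
```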
Abstract:
Oil production and exploration techniques have evolved in recent decades in order to increase fluid flow rates and optimize how the required equipment is used. The Electric Submersible Pumping (ESP) lift method is based on an electric downhole motor that drives a centrifugal pump and transports the fluids to the surface. ESP has been gaining ground among artificial lift methods owing to its ability to handle large liquid flow rates in onshore and offshore environments. The performance of a well equipped with an ESP system is intrinsically related to the operation of the centrifugal pump, since it is the pump that converts motor power into head. In the present work, a computer model was developed to analyze the three-dimensional flow in a centrifugal pump used in Electric Submersible Pumping. Using the commercial program ANSYS® CFX®, and initially using water as the working fluid, the geometry and simulation parameters were defined to approximate the flow inside the impeller and diffuser channels. Three different geometry conditions were first tested to determine which was most suitable for the problem. After choosing the most appropriate geometry, three mesh conditions were analyzed and the resulting values were compared with the experimental head characteristic curve provided by the manufacturer. The results approached the experimental curve, and the simulation time and model convergence were satisfactory given that the studied problem involves numerical analysis. After the tests with water, oil was used in the simulations, and the results were compared with a methodology used in the petroleum industry to correct for viscosity. In general, for both water and oil, the single-phase results were consistent with the experimental curves, and the three-dimensional computer models provide a preliminary basis for analyzing the two-phase flow inside the channels of centrifugal pumps used in ESP systems.
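As a reminder of the quantity being compared against the manufacturer's curve, the snippet below converts a hypothetical stage total-pressure rise from a CFD run into head via H = Δp/(ρg); the numbers are illustrative, not results from the study.

```python
# Minimal post-processing sketch (assumed values, single stage, water).
rho = 997.0          # kg/m^3, water density
g = 9.81             # m/s^2
dp_total = 2.4e5     # Pa, hypothetical total-pressure rise across impeller + diffuser

head = dp_total / (rho * g)   # metres of pumped fluid
print(f"Head ≈ {head:.1f} m")
```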
Abstract:
Company valuation models attempt to estimate the value of a company in two stages: (1) a period of explicit analysis and (2) a terminal value, based on an unlimited period of cash flows obtained through a mathematical perpetuity. In general, these models, whether they belong to the Dividend Discount Model (DDM), Discounted Cash Flow (DCF), or Residual Income Model (RIM) group, discount one attribute (dividends, free cash flow, or earnings) at a given discount rate. This discount rate, obtained in most cases from the CAPM (capital asset pricing model) or APT (arbitrage pricing theory), allows the analysis to incorporate the cost of invested capital according to the riskiness of the attributes. However, one cannot ignore that the second stage of the valuation usually accounts for 53-80% of the company value (Berkman et al., 1998) and is loaded with uncertainty. In this context, particular attention is needed when estimating the value of this portion of the company, otherwise the assessment may carry a high level of error. Mindful of this concern, this study sought to collect the perceptions of European and North American financial analysts on the key features of a company that they believe contribute most to its value. To this end, we used a survey with closed-ended questions. From the analysis of 123 valid responses using factor analysis, the authors conclude that great importance is attached to (1) the life expectancy of the company, (2) liquidity and operating performance, (3) innovation and the ability to allocate resources to R&D, and (4) management capacity and capital structure in determining the long-term value of a company or business. These results support our belief that a valuation model for companies and businesses can be formulated whose results are as close as possible to those observed in the stock market.
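A minimal two-stage valuation sketch with assumed cash flows and rates, illustrating how heavily the terminal (perpetuity) stage can weigh in the total value discussed above; none of the figures come from the study.

```python
# Two-stage DCF sketch (illustrative figures only).
r, g = 0.10, 0.02                   # discount rate and perpetual growth rate
fcf = [100, 110, 120, 130, 140]     # explicit-period free cash flows

pv_explicit = sum(cf / (1 + r) ** t for t, cf in enumerate(fcf, start=1))
terminal_value = fcf[-1] * (1 + g) / (r - g)       # Gordon-growth perpetuity at end of year 5
pv_terminal = terminal_value / (1 + r) ** len(fcf)

value = pv_explicit + pv_terminal
print(pv_terminal / value)          # share of total value carried by the terminal stage
```

With these assumed figures the terminal stage carries roughly 70% of the estimated value, which sits inside the 53-80% range cited above.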
Abstract:
Spectral albedo was measured along a 6 km transect near the Allan Hills in East Antarctica. The transect traversed the sequence from new snow through old snow, firn, and white ice, to blue ice, showing a systematic progression of decreasing albedo at all wavelengths, as well as decreasing specific surface area (SSA) and increasing density. Broadband albedos range from 0.80 for snow to 0.57 for blue ice under clear sky, and from 0.87 to 0.65 under cloud. Both air bubbles and cracks scatter sunlight; their contributions to SSA were determined by microcomputed tomography on core samples of the ice. Although albedo is governed primarily by the SSA (and secondarily by the shape) of bubbles or snow grains, albedo also correlates highly with porosity, which, as a proxy variable, would be easier for ice sheet models to predict than bubble sizes. Albedo parameterizations are therefore developed as a function of density for three broad wavelength bands commonly used in general circulation models: visible, near-infrared, and total solar. Relevance to Snowball Earth events derives from the likelihood that sublimation of equatorward-flowing sea glaciers during those events progressively exposed the same sequence of surface materials that we measured at Allan Hills, with our short 6 km transect representing a transect across many degrees of latitude on the Snowball ocean. At the equator of Snowball Earth, climate models predict thick ice, thin ice, or open water, depending largely on their albedo parameterizations; our measured albedos appear to be within the range that favors ice hundreds of meters thick.
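Purely as an illustration of what an albedo-density parameterization could look like, the sketch below interpolates linearly between the broadband endpoints reported above (snow 0.80 / blue ice 0.57 under clear sky, 0.87 / 0.65 under cloud); the linear form and the endpoint densities are assumptions, not the parameterization actually developed in the paper.

```python
def broadband_albedo(density, cloudy=False, rho_snow=350.0, rho_blue_ice=870.0):
    """Illustrative linear albedo(density) between assumed snow and blue-ice densities (kg/m^3)."""
    a_snow, a_ice = (0.87, 0.65) if cloudy else (0.80, 0.57)   # endpoints from the abstract
    w = (rho_blue_ice - density) / (rho_blue_ice - rho_snow)   # 1 at snow density, 0 at blue ice
    w = min(max(w, 0.0), 1.0)
    return a_ice + w * (a_snow - a_ice)

print(broadband_albedo(600.0))   # clear-sky albedo for an intermediate (firn-like) density
```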
Abstract:
Cardiovascular disease is one of the leading causes of death around the world. Resting heart rate has been shown to be a strong and independent risk marker for adverse cardiovascular events and mortality, and yet its role as a predictor of risk is somewhat overlooked in clinical practice. With the aim of highlighting its prognostic value, the role of resting heart rate as a risk marker for death and other adverse outcomes was further examined in a number of different patient populations. A systematic review of studies that previously assessed the prognostic value of resting heart rate for mortality and other adverse cardiovascular outcomes is presented. New analyses of nine clinical trials were carried out. Both the original Cox model and the extended Cox model that allows for time-dependent covariates were used to evaluate and compare the predictive value of baseline and time-updated heart rate measurements for adverse outcomes in the CAPRICORN, EUROPA, PROSPER, PERFORM, BEAUTIFUL and SHIFT populations. Pooled individual-patient meta-analyses of the CAPRICORN, EPHESUS, OPTIMAAL and VALIANT trials, and of the BEAUTIFUL and SHIFT trials, were also performed. The discrimination and calibration of the models applied were evaluated using Harrell's C-statistic and likelihood ratio tests, respectively. Finally, following on from the systematic review, meta-analyses of the relation between baseline and time-updated heart rate and the risk of death from any cause and from cardiovascular causes were conducted. Both elevated baseline and elevated time-updated resting heart rates were found to be associated with an increased risk of mortality and other adverse cardiovascular events in all of the populations analysed. In some cases, elevated time-updated heart rate was associated with risk of events where baseline heart rate was not. Time-updated heart rate also contributed additional information about the risk of certain events beyond knowledge of baseline heart rate or previous heart rate measurements. The addition of resting heart rate to the models where it was found to be associated with risk of outcome improved both discrimination and calibration, and in general the models including time-updated heart rate along with the baseline or previous heart rate measurement had similar, and the highest, C-statistics, and thus the greatest discriminative ability. The meta-analyses demonstrated that a 5 bpm higher baseline heart rate was associated with a 7.9% and an 8.0% increase in the risk of all-cause and cardiovascular death, respectively (both p < 0.001). Additionally, a 5 bpm higher time-updated heart rate (adjusted for baseline heart rate in eight of the ten studies included in the analyses) was associated with a 12.8% (p < 0.001) and a 10.9% (p < 0.001) increase in the risk of all-cause and cardiovascular death, respectively. These findings may motivate health care professionals to routinely assess resting heart rate in order to identify individuals at higher risk of adverse events. The fact that the addition of time-updated resting heart rate improved the discrimination and calibration of models for certain outcomes, even if only modestly, strengthens the case for adding it to traditional risk models. The findings are of particular importance, and have greater implications, for the clinical management of patients with pre-existing disease.
An elevated or increasing heart rate over time could be used as a tool, potentially alongside other established risk scores, to help doctors identify patient deterioration or those at higher risk, who might benefit from more intensive monitoring or treatment re-evaluation. Further exploration of the role of continuous recording of resting heart rate, for example while patients are at home, would be informative. In addition, investigation into the cost-effectiveness and optimal frequency of resting heart rate measurement is required. One of the most vital areas for future research is the definition of an objective cut-off value for a high resting heart rate.
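A schematic of how a time-updated covariate enters an extended Cox model, using the lifelines package on a tiny hypothetical long-format dataset (one row per follow-up interval per patient); the column names, values and penalizer are illustrative and unrelated to the trial data analysed above.

```python
import pandas as pd
from lifelines import CoxTimeVaryingFitter

# Hypothetical long-format data: heart rate re-measured at the start of each interval.
df = pd.DataFrame({
    "id":          [1, 1, 2, 2, 3, 3, 4],
    "start":       [0, 12, 0, 12, 0, 12, 0],
    "stop":        [12, 30, 12, 24, 12, 28, 16],
    "baseline_hr": [68, 68, 82, 82, 71, 71, 90],
    "hr_bpm":      [68, 75, 82, 88, 71, 69, 94],   # time-updated resting heart rate
    "event":       [0, 0, 0, 1, 0, 0, 1],
})

# Small penalizer only to stabilise the fit on this toy dataset.
ctv = CoxTimeVaryingFitter(penalizer=0.1)
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```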
Abstract:
Understanding the natural and forced variability of the atmospheric general circulation and its drivers is one of the grand challenges in climate science. It is of paramount importance to understand to what extent the systematic error of climate models affects the processes driving such variability. This is addressed by performing a set of simulations (ROCK experiments) with an intermediate-complexity atmospheric model (SPEEDY), in which the Rocky Mountains orography is increased or decreased to influence the structure of the North Pacific jet stream. For each of these modified-orography experiments, the climatic response to idealized sea surface temperature anomalies of varying intensity in the El Niño Southern Oscillation (ENSO) region is studied. The ROCK experiments are characterized by variations in Pacific jet stream intensity whose range encompasses the spread of the systematic error found in Coupled Model Intercomparison Project (CMIP6) models. When forced with ENSO-like idealized anomalies, they exhibit a non-negligible sensitivity in the response pattern over the Pacific-North American region, indicating that the model mean state can affect the model response to ENSO. It is found that the classical Rossby wave train response to ENSO is more meridionally oriented when the Pacific jet stream is weaker and more zonally oriented when the jet is stronger. Rossby wave linear theory suggests that a stronger jet implies a stronger waveguide, which traps Rossby waves at a lower latitude, favouring zonal propagation of Rossby waves. The shape of the dynamical response to ENSO affects the ENSO impacts on surface temperature and precipitation over Central and North America. A comparison of the SPEEDY results with CMIP6 models suggests a wider applicability of the results to more resource-demanding general circulation models (GCMs), opening the way to future work on the relationship between Pacific jet misrepresentation and the response to external forcing in fully fledged GCMs.
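A rough sketch of the linear-theory diagnostic invoked above: the stationary Rossby wavenumber Ks = sqrt(beta_M / U), with beta_M = beta - d²U/dy², evaluated for an idealized Gaussian jet so that the waveguide structure of a weaker and a stronger jet can be compared. The jet profile and numbers are illustrative and are not taken from SPEEDY or CMIP6.

```python
import numpy as np

a, omega = 6.371e6, 7.292e-5                      # Earth radius (m), rotation rate (s^-1)
lat = np.deg2rad(np.linspace(20.0, 60.0, 201))    # latitude band of interest
y = a * lat                                       # meridional coordinate (m)
beta = 2.0 * omega * np.cos(lat) / a              # planetary vorticity gradient

def stationary_wavenumber(u_max, lat0_deg=35.0, width_deg=8.0):
    """Ks for an idealized Gaussian jet, returned as an equivalent zonal wavenumber."""
    u = u_max * np.exp(-((np.rad2deg(lat) - lat0_deg) / width_deg) ** 2)
    beta_m = beta - np.gradient(np.gradient(u, y), y)      # effective beta
    ks = np.sqrt(np.clip(beta_m, 0.0, None) / u)           # m^-1; zero where waves are evanescent
    return ks * a * np.cos(lat)                             # wavenumber around a latitude circle

ks_weak_jet = stationary_wavenumber(20.0)    # compare the meridional structure of the waveguide
ks_strong_jet = stationary_wavenumber(40.0)  # for a weaker and a stronger Pacific-like jet
```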
Abstract:
Latent variable models in finance originate both from asset pricing theory and from time series analysis. These two strands of literature appeal to two different concepts of latent structure, both of which are useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns, given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only factorial risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, conditional independence between contemporaneous returns of a large number of assets given a small number of factors, as in standard factor analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specification of dynamic asset pricing models, which covers the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role in the validity of standard CAPM-like stock pricing and preference-free option pricing.
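For reference, the two latent-structure ingredients being unified can be written schematically in standard notation (generic, not the paper's own): the SDF pricing restriction with the factors spanning the SDF, and the implied conditional beta pricing relation.

```latex
E_t\!\left[m_{t+1} R_{i,t+1}\right] = 1, \qquad
m_{t+1} = a(s_t) + b(s_t)^{\prime} f_{t+1}
\qquad\Longrightarrow\qquad
E_t\!\left[R_{i,t+1}\right] - R_{f,t} = \beta_{i,t}^{\prime}\,\lambda_t,
\quad
\beta_{i,t} = \mathrm{Var}_t\!\left(f_{t+1}\right)^{-1}\mathrm{Cov}_t\!\left(f_{t+1}, R_{i,t+1}\right),
```

where $s_t$ denotes the state variables governing the SDF coefficients and $\lambda_t$ the factor risk premia.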
Abstract:
During the last few years, a great deal of interest has arisen concerning the application of stochastic methods to several biochemical and biological phenomena. Phenomena such as gene expression, cellular memory, and bet-hedging strategies in bacterial growth, among many others, cannot be described by continuous stochastic models because of their intrinsic discreteness and randomness. In this thesis I have used the Chemical Master Equation (CME) technique to model some feedback cycles and analyze their properties, including comparison with experimental data. In the first part of this work, the effect of stochastic stability is discussed for a toy model of the genetic switch that triggers cell division, whose malfunctioning is known to be one of the hallmarks of cancer. The second system I have worked on is the so-called futile cycle, a closed cycle of two enzymatic reactions that adds and removes a chemical group, the phosphate group, on a specific substrate. I have investigated how adding noise to the enzyme (whose copy number is usually of the order of a few hundred molecules) modifies the probability of observing a specific number of phosphorylated substrate molecules, and confirmed theoretical predictions with numerical simulations. In the third part, the results of the study of a chain of multiple phosphorylation-dephosphorylation cycles are presented. We discuss an approximation method for the exact solution in the two-dimensional case and the relationship between this method and the thermodynamic properties of the system, which is an open system far from equilibrium. In the last section, the agreement between the theoretical prediction of the total protein quantity in a mouse cell population and the quantity observed via fluorescence microscopy is shown.
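A minimal Gillespie (stochastic simulation algorithm) sketch of the futile cycle described above, under a simplified mass-action scheme in which a kinase phosphorylates S → Sp and a phosphatase reverses it; copy numbers and rate constants are illustrative, and this is only the kind of discrete stochastic trajectory the CME analysis describes, not the thesis's calculation.

```python
import numpy as np

rng = np.random.default_rng(1)
k1, k2 = 0.01, 0.01        # phosphorylation / dephosphorylation rate constants
E1, E2 = 50, 50            # kinase and phosphatase copy numbers (order of a few hundred or fewer)
S, Sp = 200, 0             # unphosphorylated and phosphorylated substrate copies
t, t_end = 0.0, 100.0
trajectory = [(t, Sp)]

while t < t_end:
    a1 = k1 * E1 * S       # propensity of S -> Sp
    a2 = k2 * E2 * Sp      # propensity of Sp -> S
    a0 = a1 + a2
    if a0 == 0.0:
        break
    t += rng.exponential(1.0 / a0)        # time to next reaction
    if rng.random() < a1 / a0:            # pick which reaction fires
        S, Sp = S - 1, Sp + 1
    else:
        S, Sp = S + 1, Sp - 1
    trajectory.append((t, Sp))            # record the phosphorylated copy number
```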
Abstract:
This paper has three primary aims: to establish an effective means of modelling mainland-island metapopulations inhabiting a dynamic landscape; to investigate the effect of immigration and dynamic changes in habitat on metapopulation patch occupancy dynamics; and to illustrate the implications of our results for decision-making and population management. We first extend the mainland-island metapopulation model of Alonso and McKane [Bull. Math. Biol. 64:913-958, 2002] to incorporate a dynamic landscape. It is shown, for both the static and the dynamic landscape models, that a suitably scaled version of the process converges to a unique deterministic model as the size of the system becomes large. We also establish that, under quite general conditions, the density of occupied patches, and the densities of suitable and occupied patches, for the respective models, have approximately normal distributions. Our results not only provide estimates for the means and variances that are valid at all stages in the evolution of the population, but also provide a tool for fitting the models to real metapopulations. We discuss the effect of immigration and habitat dynamics on metapopulations, showing that mainland-like patches heavily influence metapopulation persistence, and we argue for adopting measures to increase connectivity between this large patch and the other, island-like patches. We illustrate our results with specific reference to examples of butterfly populations and the grasshopper Bryodema tuberculata.
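As an illustration of the deterministic limit referred to above, a common Levins-type mainland-island approximation for the density of occupied patches on a static landscape can be integrated as follows; the functional form and parameter values are assumptions for illustration, not the Alonso-McKane model itself or its dynamic-landscape extension.

```python
from scipy.integrate import solve_ivp

# p = density of occupied patches; colonisation from occupied patches (c),
# immigration from the mainland (m), and local extinction (e).
c, m, e = 0.15, 0.05, 0.10

def dpdt(t, p):
    return (c * p + m) * (1.0 - p) - e * p

sol = solve_ivp(dpdt, (0.0, 200.0), [0.01])
p_long_run = sol.y[0, -1]    # approximate equilibrium density of occupied patches
print(p_long_run)
```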
Abstract:
We see that the price of a European call option in a stochastic volatility framework can be decomposed into the sum of four terms, which identify the main features of the market that affect option prices: the expected future volatility, the correlation between the volatility and the noise driving the stock prices, the market price of volatility risk, and the difference of the expected future volatility at different times. We also study some applications of this decomposition.
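Schematically, and in generic notation rather than the paper's own, the decomposition has the shape below, where BS denotes the Black-Scholes price evaluated at the expected average future volatility and the Λ terms are placeholders for the three correction terms listed above (correlation, volatility risk premium, and time variation of expected future volatility).

```latex
C_t \;=\; \mathrm{BS}\!\left(t, S_t, \bar{v}_t\right)
\;+\; \Lambda^{\rho}_t \;+\; \Lambda^{\lambda}_t \;+\; \Lambda^{\Delta}_t,
\qquad
\bar{v}_t^{2} \;=\; \frac{1}{T-t}\, E_t\!\left[\int_t^T \sigma_s^{2}\, ds\right].
```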
Abstract:
This paper considers the general problem of Feasible Generalized Least Squares Instrumental Variables (FGLS-IV) estimation using optimal instruments. First we summarize the sufficient conditions for the FGLS-IV estimator to be asymptotically equivalent to an optimal GLS-IV estimator. Then we specialize to stationary dynamic systems with stationary VAR errors, and use the sufficient conditions to derive new moment conditions for these models. These moment conditions produce useful IVs from the lagged endogenous variables, despite the correlation between errors and endogenous variables. This use of the information contained in the lagged endogenous variables expands the class of IV estimators under consideration and thereby potentially improves both the asymptotic and the small-sample efficiency of the optimal IV estimator in the class. Some Monte Carlo experiments compare the new methods with those of Hatanaka [1976]. For the DGP used in the Monte Carlo experiments, asymptotic efficiency is strictly improved by the new IVs, and experimental small-sample efficiency is improved as well.
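For orientation, the optimal GLS-IV estimator discussed above has the standard form below (generic notation, not the paper's); the feasible version replaces Ω with a consistent estimate Ω̂ constructed from the stationary VAR error structure.

```latex
\hat{\beta}_{\mathrm{GLS\text{-}IV}}
= \left( X^{\prime} Z \left( Z^{\prime} \Omega Z \right)^{-1} Z^{\prime} X \right)^{-1}
  X^{\prime} Z \left( Z^{\prime} \Omega Z \right)^{-1} Z^{\prime} y .
```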