857 results for: environments with time-varying ocean currents
Abstract:
The formation of sulfated zirconia films from a sol-gel derived aqueous suspension is subjected to dual optical monitoring during batch dip coating. Interpretation of the interferometric patterns, previously obscured by a variable refractive index, is now made possible by adding a direct, real-time measurement of that index with a polarimetric technique. The resulting physical thickness and refractive index curves (uncertainties of ±7 nm and ±0.005, respectively) show significant sensitivity to temporal film evolution under different withdrawal speeds. As a first contribution to the quantitative understanding of temporal film formation with varying nanostructure during dip coating, detailed analysis is directed at the stage of the process dominated by mass drainage, whose simple modeling with a t^{-1/2} temporal dependence is verified experimentally. © 2006 Elsevier B.V. All rights reserved.
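The drainage law mentioned above can be checked numerically. A minimal sketch, using made-up thickness values (not the experiment's data): generate noisy thicknesses following h(t) = A·t^{-1/2} and recover the exponent by a log-log linear fit.

```python
import numpy as np

# Synthetic thickness data (nm) following the drainage law h(t) = A * t**-0.5,
# with small additive noise; all numbers are illustrative, not measured.
rng = np.random.default_rng(0)
t = np.linspace(1.0, 30.0, 40)          # seconds after withdrawal
A_true = 500.0
h = A_true * t**-0.5 + rng.normal(0, 2.0, t.size)

# Linearize: log h = log A + p * log t, so the fitted slope p should be near -1/2.
p, log_A = np.polyfit(np.log(t), np.log(h), 1)
print(f"fitted exponent p = {p:.3f} (drainage model predicts -0.5)")
```

The fitted slope recovers the -1/2 exponent to within the noise, which is the kind of verification the abstract reports experimentally.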
Abstract:
In this paper we propose methods for smooth hazard estimation of a time variable that is interval-censored. These methods allow one to model the transformed hazard in terms of either smooth (smoothing-spline) or linear functions of time and other relevant time-varying predictor variables. We illustrate the use of this method on a dataset of hemophiliacs in which the outcome, time to HIV seroconversion, is interval-censored and left-truncated.
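The interval-censoring idea can be illustrated with a much simpler model than the paper's: each subject contributes S(L) - S(R) to the likelihood, the probability that the event fell inside the observed interval. The sketch below fits a one-parameter exponential hazard by grid search; the intervals are invented for illustration.

```python
import numpy as np

# Each row is an observed interval [L, R] known to contain the event time.
intervals = np.array([[0.5, 2.0], [1.0, 3.0], [2.0, 5.0], [0.0, 1.5]])

S = lambda t, lam: np.exp(-lam * t)          # exponential survival function

# Interval-censored log-likelihood: sum of log(S(L) - S(R)) over subjects,
# maximized over a grid of hazard rates.
lams = np.linspace(0.01, 2.0, 500)
loglik = [np.sum(np.log(S(intervals[:, 0], l) - S(intervals[:, 1], l)))
          for l in lams]
lam_hat = lams[int(np.argmax(loglik))]
print(f"MLE hazard rate: {lam_hat:.3f}")
```

The paper replaces this constant hazard with smooth or linear functions of time and covariates, but the likelihood contribution per subject has the same S(L) - S(R) form.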
Abstract:
In this paper we introduce technical efficiency via an intercept that evolves over time as an AR(1) process in a stochastic frontier (SF) panel data framework. The distinguishing features of the model are as follows. First, the model is dynamic in nature. Second, it can separate technical inefficiency from fixed firm-specific effects that are not part of inefficiency. Third, the model allows one to estimate technical change separately from change in technical efficiency. We propose the ML method to estimate the parameters of the model. Finally, we derive expressions to calculate/predict technical inefficiency (efficiency).
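A minimal sketch of the dynamic inefficiency term, with illustrative parameters (the paper's estimation is by ML, not simulation): inefficiency u evolves as an AR(1) process, truncated at zero so it stays non-negative, and efficiency is reported as exp(-u).

```python
import numpy as np

# Firm-specific inefficiency evolving as u_it = rho * u_i,t-1 + e_it,
# truncated at zero; rho and the noise scale are illustrative only.
rng = np.random.default_rng(1)
n_firms, n_periods, rho = 5, 50, 0.8

u = np.zeros((n_firms, n_periods))
u[:, 0] = np.abs(rng.normal(0, 1, n_firms))
for t in range(1, n_periods):
    u[:, t] = np.maximum(rho * u[:, t - 1] + rng.normal(0, 0.2, n_firms), 0.0)

# Technical efficiency is conventionally reported as exp(-u), in (0, 1].
efficiency = np.exp(-u)
print(efficiency[:, -1])
```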
Abstract:
Background: Several models have been designed to predict survival of patients with heart failure. These, while available and widely used both for stratifying patients and for deciding among treatment options at the individual level, have several limitations. Specifically, some clinical variables that may influence prognosis may have an influence that changes over time. Statistical models that include such characteristics may help in evaluating prognosis. The aim of the present study was to analyze and quantify the impact of modeling heart failure survival allowing for covariates with time-varying effects known to be independent predictors of overall mortality in this clinical setting. Methodology: Survival data from an inception cohort of five hundred patients diagnosed with heart failure functional class III and IV between 2002 and 2004 and followed up to 2006 were analyzed using the proportional hazards Cox model, variations of the Cox model, and the Aalen additive model. Principal Findings: One hundred and eighty-eight (188) patients died during follow-up. For the patients under study, age, serum sodium, hemoglobin, serum creatinine, and left ventricular ejection fraction were significantly associated with mortality. Evidence of a time-varying effect was suggested for the last three. Both high hemoglobin and high LV ejection fraction were associated with a reduced risk of dying, with a stronger initial effect. High creatinine, associated with an increased risk of dying, also presented a stronger initial effect. The impact of age and sodium was constant over time. Conclusions: The current study points to the importance of evaluating covariates with time-varying effects in heart failure models. The analysis performed suggests that variations of the Cox and Aalen models constitute a valuable tool for identifying these variables.
The implementation of covariates with time-varying effects into heart failure prognostication models may reduce bias and increase the specificity of such models.
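The "stronger initial effect" pattern reported above can be sketched in a toy form, with made-up parameters: a covariate whose log hazard ratio decays exponentially in follow-up time yields a hazard ratio that is large early and drifts toward 1 later.

```python
import numpy as np

# Illustration only (not the study's data): log hazard ratio
# beta(t) = b0 * exp(-t / tau) decays with follow-up time t,
# mimicking the stronger initial effect of e.g. creatinine.
b0, tau = 1.0, 2.0
beta = lambda t: b0 * np.exp(-t / tau)

t_grid = np.array([0.5, 5.0])            # early vs late follow-up (years)
hazard_ratio = np.exp(beta(t_grid))      # HR per unit of the covariate
print(hazard_ratio)                      # early HR exceeds late HR
```

A constant-coefficient Cox model would average these two regimes into a single HR, which is the bias the study is concerned with.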
Abstract:
There are both theoretical and empirical reasons for believing that the parameters of macroeconomic models may vary over time. However, work with time-varying parameter models has largely involved vector autoregressions (VARs), ignoring cointegration. This is despite the fact that cointegration plays an important role in informing macroeconomists on a range of issues. In this paper we develop time-varying parameter models that permit cointegration. Time-varying parameter VARs (TVP-VARs) typically use state space representations to model the evolution of parameters. In this paper, we show that it is not sensible to use straightforward extensions of TVP-VARs when allowing for cointegration. Instead we develop a specification which allows the cointegrating space to evolve over time in a manner comparable to the random walk variation used with TVP-VARs. The properties of our approach are investigated before developing a method of posterior simulation. We use our methods in an empirical investigation involving a permanent/transitory variance decomposition for inflation.
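The random-walk evolution that the paper adapts to the cointegrating space can be sketched in its simplest bivariate form, with illustrative numbers: a cointegrating coefficient drifting as a random walk, so that the equilibrium error computed with the moving coefficient stays well-behaved while a fixed-coefficient error does not have to.

```python
import numpy as np

# Cointegrating coefficient drifting as beta_t = beta_{t-1} + eta_t;
# all scales are illustrative, not estimates.
rng = np.random.default_rng(6)
T = 500
beta = 1.0 + np.cumsum(rng.normal(0, 0.01, T))   # random-walk coefficient
x = np.cumsum(rng.normal(size=T))                # I(1) common trend
y = beta * x + rng.normal(0, 0.5, T)             # cointegrated given beta_t

error_tv = y - beta * x                          # equilibrium error, moving beta
error_fixed = y - beta[0] * x                    # equilibrium error, fixed beta
print(error_tv.std(), error_fixed.std())
```

With the moving coefficient, the equilibrium error is just the stationary noise term; pinning the coefficient at its initial value leaves a drift-times-trend component in the error, which is why the paper argues a fixed cointegrating space can be misleading.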
Abstract:
In this paper, we forecast EU-area inflation with many predictors using time-varying parameter models. The facts that time-varying parameter models are parameter-rich and the time span of our data is relatively short motivate a desire for shrinkage. In constant coefficient regression models, the Bayesian Lasso is gaining increasing popularity as an effective tool for achieving such shrinkage. In this paper, we develop econometric methods for using the Bayesian Lasso with time-varying parameter models. Our approach allows the coefficient on each predictor to be: (i) time-varying, (ii) constant over time, or (iii) shrunk to zero. The econometric methodology decides automatically which category each coefficient belongs in. Our empirical results indicate the benefits of such an approach.
Abstract:
This paper is a theoretical and empirical study of the relationship between indexing policy and feedback mechanisms in the inflationary adjustment process in Brazil. The focus of our study is on two policy issues: (1) did the Brazilian system of indexing of interest rates, the exchange rate, and wages make inflation so dependent on its own past values that it created a significant feedback process and inertia in the behaviour of inflation, and (2) was the feedback effect of past inflation upon itself so strong that it dominated the effect of monetary/fiscal variables upon current inflation? This paper develops a simple model designed to capture several "stylized facts" of Brazilian indexing policy. Separate rules of "backward indexing" for interest rates, the exchange rate, and wages, reflecting the evolution of policy changes in Brazil, are incorporated in a two-sector model of industrial and agricultural prices. A transfer function derived from this model shows inflation depending on three factors: (1) past values of inflation, (2) monetary and fiscal variables, and (3) supply-shock variables. The indexing rules for interest rates, the exchange rate, and wages place restrictions on the coefficients of the transfer function. Variations in the policy-determined parameters of the indexing rules imply changes in the coefficients of the transfer function for inflation. One implication of this model, in contrast to previous results derived in analytically simpler models of indexing, is that a higher degree of indexing does not make current inflation more responsive to current monetary shocks. The empirical section of this paper studies the central hypotheses of this model through estimation of the inflation transfer function with time-varying parameters. The results show a systematic non-random variation of the transfer function coefficients, closely synchronized with changes in the observed values of the wage-indexing parameters.
Non-parametric tests show the variation of the transfer function coefficients to be statistically significant at the time of the changes in wage-indexing rules in Brazil. As the degree of indexing increased, the inflation feedback coefficients increased, while the effect of external price and agricultural shocks progressively increased and monetary effects progressively decreased.
Abstract:
In the last decade, distributed generation, with its various technologies, has increased its presence in the energy mix, presenting distribution networks with the challenge of evaluating technical impacts that require a wide range of network operational effects to be qualified and quantified. The inherent time-varying behavior of demand and of distributed generation (particularly when renewable sources are used) needs to be taken into account, since considering only critical scenarios of loading and generation may mask the impacts. One means of dealing with such complexity is the use of indices that indicate the benefit, or otherwise, of connections at a given location and for a given horizon. This paper presents a multiobjective performance index for distribution networks with time-varying distributed generation that considers a number of technical issues. The approach has been applied to a medium-voltage distribution network considering hourly demand and wind speeds. Results show that this proposal responds better to the natural behavior of loads and generation than considering a single operating scenario alone.
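The contrast between an hourly, time-varying assessment and a single critical scenario can be sketched as follows. The sub-indices, weights, and 24-hour horizon are illustrative assumptions, not the paper's actual index definition.

```python
import numpy as np

# Hypothetical normalized sub-indices per hour (1.0 = best performance):
# one for network losses, one for voltage deviation.
rng = np.random.default_rng(2)
hours = 24
losses_index = rng.uniform(0.6, 1.0, hours)
voltage_index = rng.uniform(0.7, 1.0, hours)
weights = {"losses": 0.6, "voltage": 0.4}    # illustrative weighting

# Weighted hourly index, then aggregated over the whole horizon.
hourly_index = weights["losses"] * losses_index + weights["voltage"] * voltage_index
multiobjective_index = hourly_index.mean()   # time-varying assessment

# A single critical-scenario assessment keeps only the worst hour.
single_scenario = hourly_index.min()
print(multiobjective_index, single_scenario)
```

Aggregating over all hours credits a connection for its behavior across the whole demand/wind profile, whereas the worst-case-only number can mask those benefits.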
Abstract:
The concepts of temperature and equilibrium are not well defined in systems of particles with time-varying external forces. An example is a radio frequency ion trap, with the ions laser cooled into an ordered solid, characteristic of sub-mK temperatures, whereas the kinetic energies associated with the fast coherent motion in the trap are up to 7 orders of magnitude higher. Simulations with 1,000 ions reach equilibrium between the degrees of freedom when only aperiodic displacements (secular motion) are considered. The coupling of the periodic driven motion associated with the confinement to the nonperiodic random motion of the ions is very small at low temperatures and increases quadratically with temperature.
Abstract:
When linear equality constraints are invariant through time they can be incorporated into estimation by restricted least squares. If, however, the constraints are time-varying, this standard methodology cannot be applied. In this paper we show how to incorporate linear time-varying constraints into the estimation of econometric models. The method involves the augmentation of the observation equation of a state-space model prior to estimation by the Kalman filter. Numerical optimisation routines are used for the estimation. A simple example drawn from demand analysis is used to illustrate the method and its application.
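The augmentation idea can be sketched concretely. Below, a regression with random-walk coefficients is subject to a time-varying constraint R·beta_t = q_t; the constraint is appended to the observation equation as an extra, (almost) noise-free observation, and an ordinary Kalman filter is run. All numbers are illustrative, and this is a sketch of the mechanism rather than the paper's estimation procedure (which also uses numerical optimisation for the unknown variances).

```python
import numpy as np

rng = np.random.default_rng(3)
T, k = 100, 2

# Simulate data in which the two coefficients always sum to a drifting target q_t.
q = 1.0 + 0.01 * np.arange(T)                # time-varying constraint value
b1 = 0.4 + 0.2 * np.sin(np.arange(T) / 10)
beta_true = np.column_stack([b1, q - b1])    # enforces b1 + b2 = q_t
X = rng.normal(size=(T, k))
y = (X * beta_true).sum(axis=1) + rng.normal(0, 0.1, T)

# Kalman filter with augmented observation [y_t; q_t] = [x_t'; R] beta_t + noise.
R = np.array([1.0, 1.0])                     # constraint row: beta_1 + beta_2
beta = np.zeros(k)
P = np.eye(k) * 10.0                         # diffuse initial state covariance
Q = np.eye(k) * 1e-3                         # random-walk state noise
H = np.diag([0.1**2, 1e-10])                 # ~zero variance on the constraint row

for t in range(T):
    P = P + Q                                # prediction step (random-walk state)
    Z = np.vstack([X[t], R])                 # augmented observation matrix
    z = np.array([y[t], q[t]])               # augmented observation vector
    S = Z @ P @ Z.T + H
    K = P @ Z.T @ np.linalg.inv(S)
    beta = beta + K @ (z - Z @ beta)         # update step
    P = (np.eye(k) - K @ Z) @ P

print(beta, beta.sum(), q[-1])               # filtered beta satisfies the constraint
```

Because the constraint row carries (numerically) zero observation noise, each update forces the filtered coefficients onto the current constraint while the genuine observation still informs how the coefficients split.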
Abstract:
In this article, we develop a specification technique for building multiplicative time-varying GARCH models of Amado and Teräsvirta (2008, 2013). The variance is decomposed into an unconditional and a conditional component such that the unconditional variance component is allowed to evolve smoothly over time. This nonstationary component is defined as a linear combination of logistic transition functions with time as the transition variable. The appropriate number of transition functions is determined by a sequence of specification tests. For that purpose, a coherent modelling strategy based on statistical inference is presented. It is heavily dependent on Lagrange multiplier type misspecification tests. The tests are easily implemented as they are entirely based on auxiliary regressions. Finite-sample properties of the strategy and tests are examined by simulation. The modelling strategy is illustrated in practice with two real examples: an empirical application to daily exchange rate returns and another one to daily coffee futures returns.
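The multiplicative decomposition can be sketched by simulation. Below, the conditional variance is sigma2_t = g_t * h_t, where h_t is a standard GARCH(1,1) component and g_t is a smooth unconditional component built from a single logistic transition function in rescaled time; all parameter values are illustrative, not estimates from the paper's applications.

```python
import numpy as np

rng = np.random.default_rng(4)
T = 1000
s = np.arange(1, T + 1) / T                  # rescaled time in (0, 1]

# Unconditional component: g_t = delta0 + delta1 * G(s; gamma, c),
# one logistic transition with slope gamma and location c.
gamma, c, delta0, delta1 = 20.0, 0.5, 1.0, 1.5
G = 1.0 / (1.0 + np.exp(-gamma * (s - c)))
g = delta0 + delta1 * G

# Conditional GARCH(1,1) component driven by rescaled past returns.
omega, alpha, beta = 0.05, 0.10, 0.85
h = np.empty(T)
h[0] = omega / (1 - alpha - beta)            # unconditional GARCH variance
z = rng.standard_normal(T)
r = np.empty(T)
r[0] = np.sqrt(g[0] * h[0]) * z[0]
for t in range(1, T):
    h[t] = omega + alpha * r[t - 1] ** 2 / g[t - 1] + beta * h[t - 1]
    r[t] = np.sqrt(g[t] * h[t]) * z[t]

print(g[0], g[-1])                           # the unconditional level shifts smoothly
```

The specification question the article addresses is how many such transition functions to include, decided by the sequence of LM-type tests.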
Abstract:
The problem of stability analysis for a class of neutral systems with mixed time-varying neutral, discrete, and distributed delays and nonlinear parameter perturbations is addressed. By introducing a novel Lyapunov-Krasovskii functional and combining the descriptor model transformation, the Leibniz-Newton formula, some free-weighting matrices, and a suitable change of variables, new sufficient conditions are established for the stability of the considered system, which are neutral-delay-dependent, discrete-delay-range-dependent, and distributed-delay-dependent. The conditions are presented in terms of linear matrix inequalities (LMIs) and can be efficiently solved using convex programming techniques. Two numerical examples are given to illustrate the efficiency of the proposed method.
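The paper's delay-dependent LMIs require a semidefinite-programming solver, but the underlying Lyapunov idea has a minimal delay-free analogue that fits in a few lines: x' = Ax is asymptotically stable iff, for any Q > 0, the Lyapunov equation A'P + PA = -Q has a symmetric solution P > 0. The matrix A below is an illustrative stable example.

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])                  # illustrative stable matrix
Q = np.eye(2)

# Row-major vec identity: vec(A'P) = (A' kron I) vec(P),
# vec(PA) = (I kron A') vec(P), so the Lyapunov equation is linear in vec(P).
n = A.shape[0]
M = np.kron(A.T, np.eye(n)) + np.kron(np.eye(n), A.T)
P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)
P = (P + P.T) / 2                            # symmetrize against round-off

eigs = np.linalg.eigvalsh(P)
print(P, eigs)                               # all eigenvalues positive => stable
```

The paper's conditions generalize this certificate to a Lyapunov-Krasovskii functional whose positivity and decrease conditions become coupled LMIs in several matrix variables.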
Abstract:
This paper empirically analyzes the volatility of consumption-based stochastic discount factors as a measure of implicit economic fears by studying its relationship with future economic and stock market cycles. Time-varying economic fears seem to be well captured by the volatility of stochastic discount factors. In particular, the volatility of a recursive-utility-based stochastic discount factor with contemporaneous growth explains between 9 and 34 percent of future changes in industrial production at short and long horizons, respectively. It also explains ex-ante uncertainty and risk aversion. However, future stock market cycles are better explained by a similar stochastic discount factor with long-run consumption growth. This specification of the stochastic discount factor presents higher volatility and lower pricing errors than the specification with contemporaneous consumption growth.
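A toy version of the object being measured: with power utility (simpler than the recursive-utility specification in the paper), the stochastic discount factor is m = beta * g^(-gamma) for gross consumption growth g, and its scale-free volatility can be computed by simulation. Growth moments and preference parameters below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(5)
beta, gamma = 0.99, 10.0
g = np.exp(rng.normal(0.02, 0.02, 100_000))  # lognormal gross consumption growth

# Power-utility SDF and its coefficient of variation, a common
# scale-free measure of SDF volatility ("economic fear" proxy).
m = beta * g ** (-gamma)
sdf_volatility = m.std() / m.mean()
print(f"SDF volatility: {sdf_volatility:.3f}")
```

In this lognormal toy case the coefficient of variation is approximately gamma times the standard deviation of log consumption growth, which makes clear why high risk aversion or volatile growth raises measured "fear".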