971 results for Panel VAR models
Abstract:
Three simple climate models (SCMs) are calibrated using simulations from atmosphere-ocean general circulation models (AOGCMs). In addition to two conventional SCMs, results are obtained from a third, simpler model developed specifically for this study. An easy-to-implement, comprehensive iterative procedure is applied that optimises the SCM emulation of global-mean surface temperature and total ocean heat content, and, if available in the SCM, of surface temperature over land, over the ocean and in both hemispheres, and of the global-mean ocean temperature profile. The method gives best-fit estimates as well as uncertainty intervals for the different SCM parameters. For the calibration, AOGCM simulations with two different types of forcing scenarios are used: pulse forcing simulations performed with 2 AOGCMs and gradually changing forcing simulations from 15 AOGCMs obtained within the framework of the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. The method is found to work well. For all possible combinations of SCMs and AOGCMs the emulation of AOGCM results could be improved. The obtained SCM parameters depend both on the AOGCM data and on the type of forcing scenario. SCMs with a poor representation of the atmosphere's thermal inertia are better able to emulate AOGCM results from gradually changing forcing than from pulse forcing simulations. Correct simultaneous emulation of both atmospheric temperatures and the ocean temperature profile by the SCMs strongly depends on the representation of the temperature gradient between the atmosphere and the mixed layer. Introducing climate sensitivities that depend on the forcing mechanism allows the SCMs to emulate AOGCM responses to carbon dioxide and solar insolation forcings equally well. Also, some SCM parameters are found to be very insensitive to the fitting, and the reduction of their uncertainty through the fitting procedure is only marginal, while other parameters change considerably. The very simple SCM is found to reproduce the AOGCM results as well as the other two, comparatively more sophisticated SCMs.
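As a rough illustration of this kind of calibration (not the study's actual SCMs or its iterative optimisation procedure), the sketch below fits a one-box energy-balance model to a synthetic "AOGCM" global-mean temperature series by least squares; the forcing ramp, noise level, and parameter names are assumptions made for the example.

```python
# Illustrative only: a one-box energy-balance "SCM" fitted to a synthetic
# AOGCM-like series; not the paper's SCMs or its iterative procedure.
import numpy as np
from scipy.optimize import least_squares

def scm_temperature(params, forcing, dt=1.0):
    """One-box energy balance model: C dT/dt = F(t) - lam * T."""
    lam, heat_capacity = params
    temps = np.zeros_like(forcing)
    for t in range(1, forcing.size):
        temps[t] = temps[t - 1] + dt * (forcing[t - 1] - lam * temps[t - 1]) / heat_capacity
    return temps

# Synthetic stand-in for an AOGCM run with gradually increasing forcing.
years = np.arange(140)
forcing = np.minimum(years / 70.0, 1.0) * 3.7          # ramp to ~3.7 W m-2
rng = np.random.default_rng(0)
aogcm_temp = scm_temperature([1.1, 8.0], forcing) + rng.normal(0.0, 0.05, years.size)

# Calibrate the SCM parameters against the AOGCM series by least squares,
# which yields best-fit values analogous to those discussed above.
fit = least_squares(
    lambda p: scm_temperature(p, forcing) - aogcm_temp,
    x0=[1.5, 5.0],
    bounds=([0.1, 1.0], [3.0, 50.0]),
)
print("best-fit feedback parameter (W m-2 K-1):", fit.x[0])
print("best-fit effective heat capacity (W yr m-2 K-1):", fit.x[1])
```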
Abstract:
SCIENTIFIC SUMMARY

Globally averaged total column ozone has declined over recent decades due to the release of ozone-depleting substances (ODSs) into the atmosphere. Now, as a result of the Montreal Protocol, ozone is expected to recover from the effects of ODSs as ODS abundances decline in the coming decades. However, a number of factors in addition to ODSs have led to and will continue to lead to changes in ozone. Discriminating between the causes of past and projected ozone changes is necessary, not only to identify the progress in ozone recovery from ODSs, but also to evaluate the effectiveness of climate and ozone protection policy options.

Factors Affecting Future Ozone and Surface Ultraviolet Radiation

• At least for the next few decades, the decline of ODSs is expected to be the major factor affecting the anticipated increase in global total column ozone. However, several factors other than ODSs will affect the future evolution of ozone in the stratosphere. These include changes in (i) stratospheric circulation and temperature due to changes in long-lived greenhouse gas (GHG) abundances, (ii) stratospheric aerosol loading, and (iii) source gases of highly reactive stratospheric hydrogen and nitrogen compounds. Factors that amplify the effects of ODSs on ozone (e.g., stratospheric aerosols) will likely decline in importance as ODSs are gradually eliminated from the atmosphere.

• Increases in GHG emissions can both positively and negatively affect ozone. Carbon dioxide (CO2)-induced stratospheric cooling elevates middle and upper stratospheric ozone and decreases the time taken for ozone to return to 1980 levels, while projected GHG-induced increases in tropical upwelling decrease ozone in the tropical lower stratosphere and increase ozone in the extratropics. Increases in nitrous oxide (N2O) and methane (CH4) concentrations also directly impact ozone chemistry but the effects are different in different regions.

• The Brewer-Dobson circulation (BDC) is projected to strengthen over the 21st century and thereby affect ozone amounts. Climate models consistently predict an acceleration of the BDC or, more specifically, of the upwelling mass flux in the tropical lower stratosphere of around 2% per decade as a consequence of GHG abundance increases. A stronger BDC would decrease the abundance of tropical lower stratospheric ozone, increase poleward transport of ozone, and could reduce the atmospheric lifetimes of long-lived ODSs and other trace gases. While simulations showing faster ascent in the tropical lower stratosphere to date are a robust feature of chemistry-climate models (CCMs), this has not been confirmed by observations and the responsible mechanisms remain unclear.

• Substantial ozone losses could occur if stratospheric aerosol loading were to increase in the next few decades, while halogen levels are high. Stratospheric aerosol increases may be caused by sulfur contained in volcanic plumes entering the stratosphere or from human activities. The latter might include attempts to geoengineer the climate system by enhancing the stratospheric aerosol layer. The ozone losses mostly result from enhanced heterogeneous chemistry on stratospheric aerosols. Enhanced aerosol heating within the stratosphere also leads to changes in temperature and circulation that affect ozone.

• Surface ultraviolet (UV) levels will not be affected solely by ozone changes but also by the effects of climate change and by air quality change in the troposphere.
These tropospheric effects include changes in clouds, tropospheric aerosols, surface reflectivity, and tropospheric sulfur dioxide (SO2) and nitrogen dioxide (NO2). The uncertainties in projections of these factors are large. Projected increases in tropospheric ozone are more certain and may lead to reductions in surface erythemal ("sunburning") irradiance of up to 10% by 2100. Changes in clouds may lead to decreases or increases in surface erythemal irradiance of up to 15% depending on latitude.

Expected Future Changes in Ozone

Full ozone recovery from the effects of ODSs and return of ozone to historical levels are not synonymous. In this chapter a key target date is chosen to be 1980, in part to retain the connection to previous Ozone Assessments. Noting, however, that decreases in ozone may have occurred in some regions of the atmosphere prior to 1980, 1960 return dates are also reported. The projections reported on in this chapter are taken from a recent compilation of CCM simulations. The ozone projections, which also form the basis for the UV projections, are limited in their representativeness of possible futures since they mostly come from CCM simulations based on a single GHG emissions scenario (scenario A1B of Emissions Scenarios. A Special Report of Working Group III of the Intergovernmental Panel on Climate Change, Cambridge University Press, 2000) and a single ODS emissions scenario (adjusted A1 of the previous (2006) Ozone Assessment). Throughout this century, the vertical, latitudinal, and seasonal structure of the ozone distribution will be different from what it was in 1980. For this reason, ozone changes in different regions of the atmosphere are considered separately.

• The projections of changes in ozone and surface clear-sky UV are broadly consistent with those reported on in the 2006 Assessment.

• The capability of making projections and attribution of future ozone changes has been improved since the 2006 Assessment. Use of CCM simulations from an increased number of models extending through the entire period of ozone depletion and recovery from ODSs (1960–2100) as well as sensitivity simulations have allowed more robust projections of long-term changes in the stratosphere and of the relative contributions of ODSs and GHGs to those changes.

• Global annually averaged total column ozone is projected to return to 1980 levels before the middle of the century and earlier than when stratospheric halogen loading returns to 1980 levels. CCM projections suggest that this early return is primarily a result of GHG-induced cooling of the upper stratosphere because the effects of circulation changes on tropical and extratropical ozone largely cancel. Global (90°S–90°N) annually averaged total column ozone will likely return to 1980 levels between 2025 and 2040, well before the return of stratospheric halogens to 1980 levels between 2045 and 2060.

• Simulated changes in tropical total column ozone from 1960 to 2100 are generally small. The evolution of tropical total column ozone in models depends on the balance between upper stratospheric increases and lower stratospheric decreases. The upper stratospheric increases result from declining ODSs and a slowing of ozone destruction resulting from GHG-induced cooling. Ozone decreases in the lower stratosphere mainly result from an increase in tropical upwelling. From 1960 until around 2000, a general decline is simulated, followed by a gradual increase to values typical of 1980 by midcentury.
Thereafter, although total column ozone amounts decline slightly again toward the end of the century, by 2080 they are no longer expected to be affected by ODSs. Confidence in tropical ozone projections is compromised by the fact that simulated decreases in column ozone to date are not supported by observations, suggesting that significant uncertainties remain.

• Midlatitude total column ozone is simulated to evolve differently in the two hemispheres. Over northern midlatitudes, annually averaged total column ozone is projected to return to 1980 values between 2015 and 2030, while for southern midlatitudes the return to 1980 values is projected to occur between 2030 and 2040. The more rapid return to 1980 values in northern midlatitudes is linked to a more pronounced strengthening of the poleward transport of ozone due to the effects of increased GHG levels, and effects of Antarctic ozone depletion on southern midlatitudes. By 2100, midlatitude total column ozone is projected to be above 1980 values in both hemispheres.

• October-mean Antarctic total column ozone is projected to return to 1980 levels after midcentury, later than in any other region, and yet earlier than when stratospheric halogen loading is projected to return to 1980 levels. The slightly earlier return of ozone to 1980 levels (2045–2060) results primarily from upper stratospheric cooling and resultant increases in ozone. The return of polar halogen loading to 1980 levels (2050–2070) in CCMs is earlier than in empirical models that exclude the effects of GHG-induced changes in circulation. Our confidence in the drivers of changes in Antarctic ozone is higher than for other regions because (i) ODSs exert a strong influence on Antarctic ozone, (ii) the effects of changes in GHG abundances are comparatively small, and (iii) projections of ODS emissions are more certain than those for GHGs. Small Antarctic ozone holes (areas of ozone <220 Dobson units, DU) could persist to the end of the 21st century.

• March-mean Arctic total column ozone is projected to return to 1980 levels two to three decades before polar halogen loading returns to 1980 levels, and to exceed 1980 levels thereafter. While CCM simulations project a return to 1980 levels between 2020 and 2035, most models tend not to capture observed low temperatures and thus underestimate present-day Arctic ozone loss such that it is possible that this return date is biased early. Since the strengthening of the Brewer-Dobson circulation through the 21st century leads to increases in springtime Arctic column ozone, by 2100 Arctic ozone is projected to lie well above 1960 levels.

Uncertainties in Projections

• Conclusions dependent on future GHG levels are less certain than those dependent on future ODS levels since ODS emissions are controlled by the Montreal Protocol. For the six GHG scenarios considered by a few CCMs, the simulated differences in stratospheric column ozone over the second half of the 21st century are largest in the northern midlatitudes and the Arctic, with maximum differences of 20–40 DU between the six scenarios in 2100.

• There remain sources of uncertainty in the CCM simulations. These include the use of prescribed ODS mixing ratios instead of emission fluxes as lower boundary conditions, the range of sea surface temperatures and sea ice concentrations, missing tropospheric chemistry, model parameterizations, and model climate sensitivity.
• Geoengineering schemes for mitigating climate change by continuous injections of sulfur-containing compounds into the stratosphere, if implemented, would substantially affect stratospheric ozone, particularly in polar regions. Ozone losses observed following large volcanic eruptions support this prediction. However, sporadic volcanic eruptions provide limited analogs to the effects of continuous sulfur emissions. Preliminary model simulations reveal large uncertainties in assessing the effects of continuous sulfur injections.

Expected Future Changes in Surface UV

While a number of factors, in addition to ozone, affect surface UV irradiance, the focus in this chapter is on the effects of changes in stratospheric ozone on surface UV. For this reason, clear-sky surface UV irradiance is calculated from ozone projections from CCMs.

• Projected increases in midlatitude ozone abundances during the 21st century, in the absence of changes in other factors, in particular clouds, tropospheric aerosols, and air pollutants, will result in decreases in surface UV irradiance. Clear-sky erythemal irradiance is projected to return to 1980 levels on average in 2025 for the northern midlatitudes, and in 2035 for the southern midlatitudes, and to fall well below 1980 values by the second half of the century. However, actual changes in surface UV will be affected by a number of factors other than ozone.

• In the absence of changes in other factors, changes in tropical surface UV will be small because changes in tropical total column ozone are projected to be small. By the middle of the 21st century, the model projections suggest surface UV to be slightly higher than in the 1960s, very close to values in 1980, and slightly lower than in 2000. The projected decrease in tropical total column ozone through the latter half of the century will likely result in clear-sky surface UV remaining above 1960 levels. Average UV irradiance is already high in the tropics due to naturally occurring low total ozone columns and high solar elevations.

• The magnitude of UV changes in the polar regions is larger than elsewhere because ozone changes in polar regions are larger. For the next decades, surface clear-sky UV irradiance, particularly in the Antarctic, will continue to be higher than in 1980. Future increases in ozone and decreases in clear-sky UV will occur at slower rates than those associated with the ozone decreases and UV increases that occurred before 2000. In Antarctica, surface clear-sky UV is projected to return to 1980 levels between 2040 and 2060, while in the Arctic this is projected to occur between 2020 and 2030. By 2100, October surface clear-sky erythemal irradiance in Antarctica is likely to be between 5% below to 25% above 1960 levels, with considerable uncertainty. This is consistent with multi-model-mean October Antarctic total column ozone not returning to 1960 levels by 2100. In contrast, by 2100, surface clear-sky UV in the Arctic is projected to be 0–10% below 1960 levels.
Abstract:
Four-dimensional variational data assimilation (4D-Var) is used in environmental prediction to estimate the state of a system from measurements. When 4D-Var is applied in the context of high resolution nested models, problems may arise in the representation of spatial scales longer than the domain of the model. In this paper we study how well 4D-Var is able to estimate the whole range of spatial scales present in one-way nested models. Using a model of the one-dimensional advection–diffusion equation we show that small spatial scales that are observed can be captured by a 4D-Var assimilation, but that information in the larger scales may be degraded. We propose a modification to 4D-Var which allows a better representation of these larger scales.
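A minimal strong-constraint 4D-Var sketch for a one-dimensional advection–diffusion model is given below; the periodic domain, diagonal error covariances, observation layout, and the use of a generic quasi-Newton minimiser are illustrative assumptions, not the configuration studied in the paper.

```python
# Illustrative only: strong-constraint 4D-Var on a toy periodic 1-D
# advection-diffusion model; not the paper's nested-model configuration.
import numpy as np
from scipy.optimize import minimize

N, dx, dt, u, kappa, nsteps = 40, 1.0, 0.1, 1.0, 0.5, 20

def step(x):
    adv = -u * (x - np.roll(x, 1)) / dx                              # upwind advection
    dif = kappa * (np.roll(x, -1) - 2 * x + np.roll(x, 1)) / dx**2   # diffusion
    return x + dt * (adv + dif)

def run(x0, n):
    traj = [x0]
    for _ in range(n):
        traj.append(step(traj[-1]))
    return np.array(traj)

# Truth, background and sparse noisy observations over the assimilation window.
rng = np.random.default_rng(1)
truth0 = np.exp(-0.05 * (np.arange(N) - 15.0) ** 2)
background = truth0 + rng.normal(0.0, 0.1, N)
obs_times, obs_points = [5, 10, 15, 20], np.arange(0, N, 5)
truth_traj = run(truth0, nsteps)
obs = {t: truth_traj[t][obs_points] + rng.normal(0.0, 0.05, obs_points.size)
       for t in obs_times}
sigma_b2, sigma_o2 = 0.1**2, 0.05**2

def cost(x0):
    """4D-Var cost: background term plus observation misfits along the trajectory."""
    jb = 0.5 * np.sum((x0 - background) ** 2) / sigma_b2
    traj = run(x0, nsteps)
    jo = sum(0.5 * np.sum((traj[t][obs_points] - obs[t]) ** 2) / sigma_o2
             for t in obs_times)
    return jb + jo

analysis = minimize(cost, background, method="L-BFGS-B").x
print("background RMSE:", np.sqrt(np.mean((background - truth0) ** 2)))
print("analysis RMSE:  ", np.sqrt(np.mean((analysis - truth0) ** 2)))
```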
Abstract:
Models often underestimate blocking in the Atlantic and Pacific basins, and this can lead to errors in both weather and climate predictions. Horizontal resolution is often cited as the main culprit for blocking errors due to poorly resolved small-scale variability, the upscale effects of which help to maintain blocks. Although these processes are important for blocking, the authors show that much of the blocking error diagnosed using common methods of analysis and current climate models is directly attributable to the climatological bias of the model. This explains a large proportion of diagnosed blocking error in models used in the recent Intergovernmental Panel on Climate Change report. Furthermore, greatly improved statistics are obtained by diagnosing blocking using climate model data corrected to account for mean model biases. To the extent that mean biases may be corrected in low-resolution models, this suggests that such models may be able to generate greatly improved levels of atmospheric blocking.
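The effect described above can be illustrated with a toy anomaly-threshold index (a stand-in for a real blocking index): diagnosing "blocking" from raw model output inherits the model's mean bias, while removing that bias first recovers statistics much closer to the observed reference. The data and threshold below are synthetic assumptions.

```python
# Illustrative only: a toy threshold "blocking" index on synthetic data,
# showing how a mean model bias distorts the diagnosed frequency.
import numpy as np

rng = np.random.default_rng(2)
ndays = 5000

reanalysis = 5500.0 + rng.normal(0.0, 100.0, ndays)   # reference Z500-like series
model = 5450.0 + rng.normal(0.0, 100.0, ndays)        # same variability, 50 m low bias

def blocking_frequency(z, climatology, threshold=150.0):
    """Fraction of days with anomalies (relative to `climatology`) above a threshold."""
    return np.mean(z - climatology > threshold)

obs_clim = reanalysis.mean()
bias = model.mean() - obs_clim

print("reference frequency:      ", blocking_frequency(reanalysis, obs_clim))
print("model, no bias correction:", blocking_frequency(model, obs_clim))
print("model, mean bias removed: ", blocking_frequency(model - bias, obs_clim))
```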
Abstract:
The performance of various statistical models and commonly used financial indicators for forecasting securitised real estate returns is examined for five European countries: the UK, Belgium, the Netherlands, France and Italy. Within a VAR framework, it is demonstrated that the gilt-equity yield ratio is in most cases a better predictor of securitised returns than the term structure or the dividend yield. In particular, investors should consider in their real estate return models the predictability of the gilt-equity yield ratio in Belgium, the Netherlands and France, and the term structure of interest rates in France. Predictions obtained from the VAR and univariate time-series models are compared with the predictions of an artificial neural network model. It is found that, whilst no single model is universally superior across all series, accuracy measures and horizons considered, the neural network model is generally able to offer the most accurate predictions for 1-month horizons. For quarterly and half-yearly forecasts, the random walk with a drift is the most successful for the UK, Belgian and Dutch returns and the neural network for French and Italian returns. Although this study underscores market context and forecast horizon as parameters relevant to the choice of the forecast model, it strongly indicates that analysts should exploit the potential of neural networks and assess more fully their forecast performance against more traditional models.
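A hedged sketch of this type of forecast comparison is shown below: a VAR containing the return series and a yield-ratio predictor is estimated on synthetic monthly data, and its one-month-ahead forecasts are compared against a random-walk-with-drift benchmark by RMSE. The data-generating process and evaluation window are invented for illustration and do not reproduce the study's results.

```python
# Illustrative only: synthetic data, not the study's dataset or results.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
n, n_train = 240, 200                       # 20 years of monthly observations

# Synthetic gilt-equity yield ratio (AR(1)) and returns that load on its lag.
gey = np.zeros(n)
for t in range(1, n):
    gey[t] = 0.9 * gey[t - 1] + rng.normal(0, 0.1)
returns = 0.002 + 0.5 * np.concatenate(([0.0], gey[:-1])) + rng.normal(0, 0.03, n)

data = pd.DataFrame({"re_return": returns, "gey_ratio": gey})
train = data.iloc[:n_train]

# Fit a VAR(2) on the training sample and produce rolling one-month forecasts.
res = VAR(train).fit(2)
p = res.k_ar

var_preds, rw_preds = [], []
for i in range(n_train, n):
    history = data.values[:i]
    var_preds.append(res.forecast(history[-p:], steps=1)[0, 0])
    drift = np.mean(np.diff(history[:, 0]))           # random walk with drift
    rw_preds.append(history[-1, 0] + drift)

actual = data["re_return"].values[n_train:]

def rmse(pred):
    return np.sqrt(np.mean((np.asarray(pred) - actual) ** 2))

print("VAR one-month-ahead RMSE:   ", rmse(var_preds))
print("Random walk with drift RMSE:", rmse(rw_preds))
```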
Abstract:
Climate modeling is a complex process, requiring accurate and complete metadata in order to identify, assess and use climate data stored in digital repositories. The preservation of such data is increasingly important given the development of ever more complex models to predict the effects of global climate change. The EU METAFOR project has developed a Common Information Model (CIM) to describe climate data and the models and modelling environments that produce this data. There is a wide degree of variability between different climate models and modelling groups. To accommodate this, the CIM has been designed to be highly generic and flexible, with extensibility built in. METAFOR describes the climate modelling process simply as "an activity undertaken using software on computers to produce data." This process has been described as separate UML packages (and, ultimately, XML schemas). This fairly generic structure can be paired with more specific "controlled vocabularies" in order to restrict the range of valid CIM instances. The CIM will aid digital preservation of climate models as it will provide an accepted standard structure for the model metadata. Tools to write and manage CIM instances, and to allow convenient and powerful searches of CIM databases, are also under development. Community buy-in of the CIM has been achieved through a continual process of consultation with the climate modelling community, and through the METAFOR team's development of a questionnaire that will be used to collect the metadata for the Intergovernmental Panel on Climate Change's (IPCC) Coupled Model Intercomparison Project Phase 5 (CMIP5) model runs.
Abstract:
Rat ileal air interface and submerged explant models were developed and used to compare the adhesion of Salmonella enterica var Enteritidis wild-type strains with that of their isogenic single and multiple deletion mutants. The modified strains studied were defective for fimbriae, flagella, motility or chemotaxis and binding was assessed on tissues with and without an intact mucus layer. A multiple afimbriate/aflagellate (fim(-)/fla(-)) strain, a fimbriate but aflagellate (fla(-)) strain and a fimbriate/flagellate but non-motile (mot(-)) strain bound significantly less extensively to the explants than the corresponding wild-type strains. With the submerged explant model this difference was evident in tissues with or without a mucus layer, whereas in the air interface model it was observed only in tissues with an intact mucus layer. A smooth swimming chemotaxis-defective (che(-)) strain and single or multiple afimbriate strains bound to explants as well as their corresponding wild-type strain. This suggests that under the present experimental conditions fimbriae were not essential for attachment of S. enterica var Enteritidis to rat ileal explants. However, the possession of active flagella did appear to be an important factor in enabling salmonellae to penetrate the gastrointestinal mucus layer and attach specifically to epithelial cells.
Abstract:
Both historical and idealized climate model experiments are performed with a variety of Earth system models of intermediate complexity (EMICs) as part of a community contribution to the Intergovernmental Panel on Climate Change Fifth Assessment Report. Historical simulations start at 850 CE and continue through to 2005. The standard simulations include changes in forcing from solar luminosity, Earth's orbital configuration, CO2, additional greenhouse gases, land use, and sulphate and volcanic aerosols. In spite of very different modelled pre-industrial global surface air temperatures, overall 20th century trends in surface air temperature and carbon uptake are reasonably well simulated when compared to observed trends. Land carbon fluxes show much more variation between models than ocean carbon fluxes, and recent land fluxes appear to be slightly underestimated. It is possible that recent modelled climate trends or climate–carbon feedbacks are overestimated, resulting in too much land carbon loss, or that carbon uptake due to CO2 and/or nitrogen fertilization is underestimated. Several one-thousand-year-long, idealized, 2 × and 4 × CO2 experiments are used to quantify standard model characteristics, including transient and equilibrium climate sensitivities, and climate–carbon feedbacks. The values from EMICs generally fall within the range given by general circulation models. Seven additional historical simulations, each including a single specified forcing, are used to assess the contributions of different climate forcings to the overall climate and carbon cycle response. The response of surface air temperature is the linear sum of the individual forcings, while the carbon cycle response shows a non-linear interaction between land-use change and CO2 forcings for some models. Finally, the preindustrial portions of the last millennium simulations are used to assess historical model carbon–climate feedbacks. Given the specified forcing, there is a tendency for the EMICs to underestimate the drop in surface air temperature and CO2 between the Medieval Climate Anomaly and the Little Ice Age estimated from palaeoclimate reconstructions. This in turn could be a result of unforced variability within the climate system, uncertainty in the reconstructions of temperature and CO2, errors in the reconstructions of forcing used to drive the models, or the incomplete representation of certain processes within the models. Given the forcing datasets used in this study, the models calculate significant land-use emissions over the pre-industrial period. This implies that land-use emissions might need to be taken into account when making estimates of climate–carbon feedbacks from palaeoclimate reconstructions.
Abstract:
Operational forecasting centres are currently developing data assimilation systems for coupled atmosphere-ocean models. Strongly coupled assimilation, in which a single assimilation system is applied to a coupled model, presents significant technical and scientific challenges. Hence weakly coupled assimilation systems are being developed as a first step, in which the coupled model is used to compare the current state estimate with observations, but corrections to the atmosphere and ocean initial conditions are then calculated independently. In this paper we provide a comprehensive description of the different coupled assimilation methodologies in the context of four dimensional variational assimilation (4D-Var) and use an idealised framework to assess the expected benefits of moving towards coupled data assimilation. We implement an incremental 4D-Var system within an idealised single column atmosphere-ocean model. The system has the capability to run both strongly and weakly coupled assimilations as well as uncoupled atmosphere or ocean only assimilations, thus allowing a systematic comparison of the different strategies for treating the coupled data assimilation problem. We present results from a series of identical twin experiments devised to investigate the behaviour and sensitivities of the different approaches. Overall, our study demonstrates the potential benefits that may be expected from coupled data assimilation. When compared to uncoupled initialisation, coupled assimilation is able to produce more balanced initial analysis fields, thus reducing initialisation shock and its impact on the subsequent forecast. Single observation experiments demonstrate how coupled assimilation systems are able to pass information between the atmosphere and ocean and therefore use near-surface data to greater effect. We show that much of this benefit may also be gained from a weakly coupled assimilation system, but that this can be sensitive to the parameters used in the assimilation.
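The information-passing mechanism mentioned above can be illustrated with a two-variable analysis-equation sketch (not the incremental 4D-Var system of the paper): with a coupled background-error covariance containing atmosphere–ocean cross-correlations, a single near-surface atmosphere observation also corrects the ocean, whereas a block-diagonal (uncoupled) covariance leaves the ocean unchanged. All numbers below are assumed.

```python
# Illustrative only: a single analysis update in a two-variable coupled system,
# showing how cross-covariances pass information from atmosphere to ocean.
import numpy as np

# State: [atmosphere near-surface temperature, ocean mixed-layer temperature].
background = np.array([15.0, 14.0])
truth = np.array([16.0, 15.0])

# Background-error covariances: coupled (with cross-correlation) vs uncoupled.
B_coupled = np.array([[1.0, 0.6],
                      [0.6, 1.0]])
B_uncoupled = np.array([[1.0, 0.0],
                        [0.0, 1.0]])

# A single observation of the atmospheric variable only.
H = np.array([[1.0, 0.0]])
R = np.array([[0.25]])
obs = np.array([truth[0] + 0.1])

def analysis(xb, B):
    """Best linear unbiased update (equivalent to one 3D-Var analysis step)."""
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return xb + (K @ (obs - H @ xb)).ravel()

print("coupled analysis   [atmos, ocean]:", analysis(background, B_coupled))
print("uncoupled analysis [atmos, ocean]:", analysis(background, B_uncoupled))
print("truth              [atmos, ocean]:", truth)
```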
Abstract:
Atmosphere only and ocean only variational data assimilation (DA) schemes are able to use window lengths that are optimal for the error growth rate, non-linearity and observation density of the respective systems. Typical window lengths are 6-12 hours for the atmosphere and 2-10 days for the ocean. However, in the implementation of coupled DA schemes it has been necessary to match the window length of the ocean to that of the atmosphere, which may potentially sacrifice the accuracy of the ocean analysis in order to provide a more balanced coupled state. This paper investigates how extending the window length in the presence of model error affects both the analysis of the coupled state and the initialized forecast when using coupled DA with differing degrees of coupling. Results are illustrated using an idealized single column model of the coupled atmosphere-ocean system. It is found that the analysis error from an uncoupled DA scheme can be smaller than that from a coupled analysis at the initial time, due to faster error growth in the coupled system. However, this does not necessarily lead to a more accurate forecast due to imbalances in the coupled state. Instead coupled DA is more able to update the initial state to reduce the impact of the model error on the accuracy of the forecast. The effect of model error is potentially most detrimental in the weakly coupled formulation due to the inconsistency between the coupled model used in the outer loop and uncoupled models used in the inner loop.
Abstract:
The study aims to assess the empirical adherence of the permanent income theory and the consumption smoothing view in Latin America. Two present value models are considered, one describing household behavior and the other open economy macroeconomics. Following the methodology developed in Campbell and Shiller (1987), Bivariate Vector Autoregressions are estimated for the saving ratio and the real growth rate of income concerning the household behavior model and for the current account and the change in national cash flow regarding the open economy model. The countries in the sample are considered separately in the estimation process (individual system estimation) as well as jointly (joint system estimation). Ordinary Least Squares (OLS) and Seemingly Unrelated Regressions (SURE) estimates of the coefficients are generated. Wald Tests are then conducted to verify if the VAR coefficient estimates are in conformity with those predicted by the theory. While the empirical results are sensitive to the estimation method and discount factors used, there is only weak evidence in favor of the permanent income theory and consumption smoothing view in the group of countries analyzed.
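A compact sketch of the Campbell–Shiller-style procedure follows: a bivariate VAR(1) is estimated by OLS on synthetic saving-ratio and income-growth series, and a Wald statistic tests a linear restriction on the coefficients. The restriction tested here (that the lagged saving ratio does not enter the income-growth equation) is a simplified stand-in for the full present-value restrictions used in the study.

```python
# Illustrative only: OLS estimation of a bivariate VAR(1) and a Wald test
# of a simplified linear restriction, on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
T = 200

# Synthetic saving ratio (s) and income growth (g) series from a known VAR(1).
A_true = np.array([[0.5, 0.2],
                   [0.0, 0.3]])
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(0, 0.1, 2)

# VAR(1) by OLS (equivalent to SUR here since both equations share regressors).
X = np.column_stack([np.ones(T - 1), y[:-1]])        # constant + lagged s, g
Y = y[1:]
B_hat, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)   # shape (3, 2), columns = equations
resid = Y - X @ B_hat
sigma = resid.T @ resid / (T - 1 - X.shape[1])
cov_vecB = np.kron(sigma, np.linalg.inv(X.T @ X))    # covariance of vec(B), equation-stacked

# Wald test of the restriction that lagged saving does not enter the
# income-growth equation (coefficient on s_{t-1} in equation 2 equals zero).
vecB = B_hat.flatten(order="F")                      # stack coefficients equation by equation
R = np.zeros((1, vecB.size))
R[0, 3 + 1] = 1.0                                    # equation 2 block starts at index 3
r = np.zeros(1)
diff = R @ vecB - r
w = diff.T @ np.linalg.inv(R @ cov_vecB @ R.T) @ diff
print("Wald statistic:", float(w))
print("p-value (chi-squared, 1 df):", float(stats.chi2.sf(w, df=1)))
```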
Abstract:
Despite the commonly held belief that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of this feature of the data. We use exhaustive Monte-Carlo simulations to investigate the importance of restrictions implied by common-cyclical features for estimates and forecasts based on vector autoregressive models. First, we show that the "best" empirical model developed without common cycle restrictions need not nest the "best" model developed with those restrictions. This is due to possible differences in the lag-lengths chosen by model selection criteria for the two alternative models. Second, we show that the costs of ignoring common cyclical features in vector autoregressive modelling can be high, both in terms of forecast accuracy and efficient estimation of variance decomposition coefficients. Third, we find that the Hannan-Quinn criterion performs best among model selection criteria in simultaneously selecting the lag-length and rank of vector autoregressions.
Abstract:
Despite the belief, supported by recent applied research, that aggregate data display short-run comovement, there has been little discussion about the econometric consequences of these data "features." We use exhaustive Monte-Carlo simulations to investigate the importance of restrictions implied by common-cyclical features for estimates and forecasts based on vector autoregressive and error correction models. First, we show that the "best" empirical model developed without common cycle restrictions need not nest the "best" model developed with those restrictions, due to the use of information criteria for choosing the lag order of the two alternative models. Second, we show that the costs of ignoring common-cyclical features in VAR analysis may be high in terms of forecasting accuracy and efficiency of estimates of variance decomposition coefficients. Although these costs are more pronounced when the lag order of VAR models is known, they are also non-trivial when it is selected using the conventional tools available to applied researchers. Third, we find that if the data have common-cyclical features and the researcher wants to use an information criterion to select the lag length, the Hannan-Quinn criterion is the most appropriate, since the Akaike and the Schwarz criteria have a tendency to over- and under-predict the lag length, respectively, in our simulations.
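The lag-selection comparison described in the two abstracts above can be sketched with a small Monte Carlo experiment: data are generated from a known VAR(2) and the Akaike, Schwarz, and Hannan–Quinn criteria are scored on how often they recover the true lag length. Sample size, coefficients, and replication count are arbitrary choices for illustration; the criteria are computed directly from the OLS residual covariance.

```python
# Illustrative only: Monte Carlo comparison of information criteria for
# selecting the lag length of a VAR, with data from a known VAR(2).
import numpy as np

rng = np.random.default_rng(5)
TRUE_LAG, T, NREP, MAXLAG, K = 2, 200, 200, 6, 2

A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.2]])

def simulate():
    y = np.zeros((T + 50, K))
    for t in range(2, T + 50):
        y[t] = A1 @ y[t - 1] + A2 @ y[t - 2] + rng.normal(0, 1, K)
    return y[50:]                                    # drop burn-in

def info_criteria(y, p, t0=MAXLAG):
    """Fit a VAR(p) by OLS on a common sample and return (AIC, BIC, HQ)."""
    Teff = len(y) - t0
    X = np.column_stack([np.ones(Teff)] +
                        [y[t0 - lag: len(y) - lag] for lag in range(1, p + 1)])
    Y = y[t0:]
    B, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
    E = Y - X @ B
    logdet = np.linalg.slogdet(E.T @ E / Teff)[1]
    n_par = p * K * K                                # lag coefficients (constants excluded)
    return (logdet + 2 * n_par / Teff,
            logdet + np.log(Teff) * n_par / Teff,
            logdet + 2 * np.log(np.log(Teff)) * n_par / Teff)

names = ("AIC", "BIC (Schwarz)", "HQ (Hannan-Quinn)")
hits = np.zeros(3, dtype=int)
for _ in range(NREP):
    y = simulate()
    crits = np.array([info_criteria(y, p) for p in range(1, MAXLAG + 1)])
    hits += (crits.argmin(axis=0) + 1) == TRUE_LAG

for name, count in zip(names, hits):
    print(f"{name:18s} selects the true lag length in {count / NREP:.0%} of runs")
```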
Abstract:
This paper considers the general problem of Feasible Generalized Least Squares Instrumental Variables (FGLS IV) estimation using optimal instruments. First we summarize the sufficient conditions for the FGLS IV estimator to be asymptotically equivalent to an optimal GLS IV estimator. Then we specialize to stationary dynamic systems with stationary VAR errors, and use the sufficient conditions to derive new moment conditions for these models. These moment conditions produce useful IVs from the lagged endogenous variables, despite the correlation between errors and endogenous variables. This use of the information contained in the lagged endogenous variables expands the class of IV estimators under consideration and thereby potentially improves both asymptotic and small-sample efficiency of the optimal IV estimator in the class. Some Monte Carlo experiments compare the new methods with those of Hatanaka [1976]. For the DGP used in the Monte Carlo experiments, asymptotic efficiency is strictly improved by the new IVs, and experimental small-sample efficiency is improved as well.
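As a much simplified numerical illustration of using a lagged endogenous variable as an instrument (not the optimal FGLS IV estimator analysed in the paper, and assuming serially uncorrelated errors so that a plain lag is a valid instrument), the sketch below compares OLS and a one-instrument 2SLS estimate of a regression whose regressor is contemporaneously correlated with the error.

```python
# Illustrative only: the lag of an endogenous regressor used as an instrument,
# under the simplifying assumption that the equation error is white noise.
import numpy as np

rng = np.random.default_rng(6)
T, beta_true = 5000, 1.0

# x_t is endogenous: its innovation v_t is correlated with the equation error u_t.
# Because u_t is serially uncorrelated here, x_{t-1} (built from past innovations
# only) is uncorrelated with u_t and is therefore a valid instrument for x_t.
u = rng.normal(0, 1, T)
v = 0.6 * u + rng.normal(0, 1, T)
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.8 * x[t - 1] + v[t]
y = beta_true * x + u

xt, xlag, yt = x[1:], x[:-1], y[1:]

beta_ols = (xt @ yt) / (xt @ xt)            # inconsistent: E[x_t u_t] != 0
beta_iv = (xlag @ yt) / (xlag @ xt)         # simple IV with z = x_{t-1}

print(f"true beta: {beta_true:.2f}")
print(f"OLS estimate (biased by endogeneity):      {beta_ols:.3f}")
print(f"IV estimate, lagged x as the instrument:   {beta_iv:.3f}")
```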