Abstract:
The question is addressed whether using unbalanced updates in ocean-data assimilation schemes for seasonal forecasting systems can result in a relatively poor simulation of zonal currents. An assimilation scheme, where temperature observations are used for updating only the density field, is compared to a scheme where updates of the density field and zonal velocities are related by geostrophic balance. This is done for an equatorial linear shallow-water model. It is found that equatorial zonal velocities can deteriorate if the velocity is not updated in the assimilation procedure. Adding balanced updates to the zonal velocity is shown to be a simple remedy for the shallow-water model. Next, optimal interpolation (OI) schemes with balanced updates of the zonal velocity are implemented in two ocean general circulation models. First tests indicate a beneficial impact on equatorial upper-ocean zonal currents.
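As a rough illustration of what a balanced update can look like, the sketch below converts a pressure (or dynamic-height) increment derived from temperature observations into a geostrophically balanced zonal-velocity increment on an equatorial beta-plane. The grid layout, variable names, and the treatment right at the equator are assumptions for illustration, not the paper's operational OI code.

```python
import numpy as np

# Minimal sketch (not the paper's scheme): derive a balanced zonal-velocity
# increment from a pressure increment on an equatorial beta-plane.
RHO0 = 1025.0          # reference density (kg m^-3)
BETA = 2.3e-11         # df/dy at the equator (m^-1 s^-1)

def balanced_u_increment(p_inc, y, eq_band=2.0e5):
    """p_inc: pressure increment [ny, nx]; y: meridional coordinate (m), y=0 at
    the equator; eq_band: assumed width of the 'equatorial' band (m).
    Returns a geostrophically balanced zonal-velocity increment."""
    dpdy = np.gradient(p_inc, y, axis=0)
    f = BETA * y[:, None]                        # Coriolis parameter
    u_inc = np.zeros_like(p_inc)

    # Ordinary geostrophy away from the equator: f*u = -(1/rho0) dp/dy.
    off_eq = np.abs(y) > eq_band
    u_inc[off_eq, :] = -dpdy[off_eq, :] / (RHO0 * f[off_eq, :])

    # Near the equator f -> 0, so use the meridional derivative of the
    # geostrophic relation instead: beta*u = -(1/rho0) d2p/dy2.
    d2pdy2 = np.gradient(dpdy, y, axis=0)
    near_eq = ~off_eq
    u_inc[near_eq, :] = -d2pdy2[near_eq, :] / (RHO0 * BETA)
    return u_inc
```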
Abstract:
A smoother introduced earlier by van Leeuwen and Evensen is applied to a problem in which real observations are used in an area with strongly nonlinear dynamics. The derivation is new, but it resembles an earlier derivation by van Leeuwen and Evensen. Again a Bayesian view is taken in which the prior probability density of the model and the probability density of the observations are combined to form a posterior density. The mean and the covariance of this density give the variance-minimizing model evolution and its errors. The assumption is made that the prior probability density is a Gaussian, leading to a linear update equation. Critical evaluation shows when the assumption is justified. This also sheds light on why Kalman filters, in which the same approximation is made, work for nonlinear models. By reference to the derivation, the impact of model and observational biases on the equations is discussed, and it is shown that Bayes's formulation can still be used. A practical advantage of the ensemble smoother is that no adjoint equations have to be integrated and that error estimates are easily obtained. The present application shows that for process studies a smoother will give superior results compared to a filter, not only owing to the smooth transitions at observation points, but also because the origin of features can be followed back in time. Also its preference over a strong-constraint method is highlighted. Furthermore, it is argued that the proposed smoother is more efficient than gradient descent methods or than the representer method when error estimates are taken into account.
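For reference, the linear, variance-minimizing update that a Gaussian prior leads to can be written in generic notation (this is the standard form, not necessarily the paper's exact notation):

```latex
\psi^{a} = \psi^{f} + \mathbf{P}^{f}\mathbf{H}^{\mathrm{T}}
           \left(\mathbf{H}\mathbf{P}^{f}\mathbf{H}^{\mathrm{T}} + \mathbf{R}\right)^{-1}
           \left(\mathbf{d} - \mathbf{H}\psi^{f}\right),
```

where ψ^f is the prior (model) estimate, P^f its error covariance (here estimated from the ensemble), H the observation operator, R the observation-error covariance, and d the observations; the posterior covariance supplies the error estimates mentioned above.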
Abstract:
It is formally proved that the general smoother for nonlinear dynamics can be formulated as a sequential method, that is, observations can be assimilated sequentially during a forward integration. The general filter can be derived from the smoother, and it is shown that the general smoother and filter solutions at the final time become identical, as is expected from linear theory. Then, a new smoother algorithm based on ensemble statistics is presented and examined in an example with the Lorenz equations. The new smoother can be computed as a sequential algorithm using only forward-in-time model integrations. It bears a strong resemblance to the ensemble Kalman filter. The difference is that every time a new dataset is available during the forward integration, an analysis is computed for all previous times up to this time. Thus, the first guess for the smoother is the ensemble Kalman filter solution, and the smoother estimate provides an improvement on this, as one would expect a smoother to do. The method is demonstrated in this paper in an intercomparison with the ensemble Kalman filter and the ensemble smoother introduced by van Leeuwen and Evensen, and it is shown to be superior in an application with the Lorenz equations. Finally, a discussion is given regarding the properties of the analysis schemes when strongly non-Gaussian distributions are used. It is shown that in these cases more sophisticated analysis schemes based on Bayesian statistics must be used.
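A minimal numpy sketch of such a sequential ensemble-smoother analysis in perturbed-observation form: whenever new data arrive, the weights computed from the current ensemble are applied to the stored ensembles at all previous times. The function and variable names are illustrative, and no localisation or inflation is included.

```python
import numpy as np

rng = np.random.default_rng(0)

def enks_update(history, H, d, R):
    """history: list of (n, N) ensemble arrays for all times up to now (the last
    entry is the current time); H: (m, n) observation operator; d: (m,)
    observations; R: (m, m) observation-error covariance.
    Updates every stored ensemble with the current observations."""
    X_now = history[-1]
    n, N = X_now.shape
    m = d.size

    D = d[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T  # perturbed obs
    S = H @ X_now                                   # predicted observations
    Sp = S - S.mean(axis=1, keepdims=True)          # their ensemble anomalies
    C = Sp @ Sp.T / (N - 1) + R                     # innovation covariance
    W = Sp.T @ np.linalg.solve(C, D - S) / (N - 1)  # (N, N) weight matrix

    for j, Xj in enumerate(history):                # smoother: update ALL earlier times
        Aj = Xj - Xj.mean(axis=1, keepdims=True)
        history[j] = Xj + Aj @ W
    return history
```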
Abstract:
Solitary meanders of the Agulhas Current, so-called Natal pulses, may play an important role in the overall dynamics of this current system. Several hypotheses concerning the triggering of these pulses are tested using sea surface height and temperature data from satellites. The data show the formation of pulses in the Natal Bight area at irregular intervals ranging from 50 to 240 days. Moving downstream at speeds between 10 and 20 km day⁻¹, they sometimes reach sizes of up to 300 km. They seem to play a role in the shedding of Agulhas rings that penetrate the South Atlantic. The intermittent formation of these solitary meanders is argued to be most probably related to barotropic instability of the strongly baroclinic Agulhas Current in the Natal Bight. The vorticity structure of the observed basic flow is argued to be stable anywhere along its path. However, a proper perturbation of the jet in the Natal Bight area will allow barotropic instability, because the bottom slope there is considerably less steep than elsewhere along the South African east coast. Using satellite altimetry, these perturbations seem to be related to the intermittent presence of offshore anticyclonic anomalies, both upstream and eastward of the Natal Bight.
Abstract:
The thermohaline exchange between the Atlantic and the Southern Ocean is analyzed, using a dataset based on WOCE hydrographic data. It is shown that the salt and heat transports brought about by the South Atlantic subtropical gyre play an essential role in the Atlantic heat and salt budgets. It is found that on average the exported North Atlantic Deep Water (NADW) is fresher than the return flows (basically composed of thermocline and intermediate water), indicating that the overturning circulation (OC) exports freshwater from the Atlantic. The sensitivity of the OC to interbasin fluxes of heat and salt is studied in a 2D model, representing the Atlantic between 60°N and 30°S. The model is forced by mixed boundary conditions at the surface, and by realistic fluxes of heat and salt at its 30°S boundary. The model circulation turns out to be very sensitive to net buoyancy fluxes through the surface. Both net surface cooling and net surface saltening are sources of potential energy and impact positively on the circulation strength. The vertical distributions of the lateral fluxes tend to stabilize the stratification and, as they extract potential energy from the system, tend to weaken the flow. These results imply that a change in the composition of the NADW return transports, whether by a change in the ratio thermocline/intermediate water, or by a change in their thermohaline characteristics, might influence the Atlantic OC considerably. It is also shown that the circulation is much more sensitive to changes in the shape of the lateral buoyancy flux than to changes in the shape of the surface buoyancy flux, as the latter does not explicitly impact on the potential energy of the system. It is concluded that interocean fluxes of heat and salt are important for the strength and operation of the Atlantic thermohaline circulation, and should be correctly represented in models that are used for climate sensitivity studies.
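The statement that the overturning circulation exports freshwater from the Atlantic is commonly quantified with the overturning freshwater transport across the southern boundary; a standard form of this diagnostic (not necessarily the exact expression used in the paper) is:

```latex
M_{\mathrm{ov}} = -\frac{1}{S_{0}} \int_{-H}^{0} \overline{v}(z)\,
                  \bigl[\langle S \rangle(z) - S_{0}\bigr]\, \mathrm{d}z ,
```

where \overline{v}(z) is the zonally integrated (baroclinic) meridional velocity at 30°S, ⟨S⟩(z) the zonal-mean salinity, and S_0 a reference salinity; in the usual convention a negative M_ov indicates net freshwater export by the overturning circulation.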
Abstract:
This paper discusses an important issue related to the implementation and interpretation of the analysis scheme in the ensemble Kalman filter. It is shown that the observations must be treated as random variables at the analysis steps. That is, one should add random perturbations with the correct statistics to the observations and generate an ensemble of observations that then is used in updating the ensemble of model states. Traditionally, this has not been done in previous applications of the ensemble Kalman filter and, as will be shown, this has resulted in an updated ensemble with a variance that is too low. This simple modification of the analysis scheme results in a completely consistent approach if the covariance of the ensemble of model states is interpreted as the prediction error covariance, and there are no further requirements on the ensemble Kalman filter method, except for the use of an ensemble of sufficient size. Thus, there is a unique correspondence between the error statistics from the ensemble Kalman filter and the standard Kalman filter approach.
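A minimal sketch of the perturbed-observation analysis step argued for here: each member is updated against its own perturbed copy of the observations. Names are illustrative, and no localisation or inflation is included.

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_analysis(X, H, d, R):
    """X: (n, N) forecast ensemble; H: (m, n) observation operator;
    d: (m,) observations; R: (m, m) observation-error covariance.
    Returns the analysis ensemble."""
    n, N = X.shape
    m = d.size

    # Ensemble of observations: perturbations drawn with covariance R.
    D = d[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T

    A = X - X.mean(axis=1, keepdims=True)        # state anomalies
    Pf_Ht = A @ (H @ A).T / (N - 1)              # P^f H^T from the ensemble
    C = H @ Pf_Ht + R                            # H P^f H^T + R
    K = Pf_Ht @ np.linalg.inv(C)                 # Kalman gain
    return X + K @ (D - H @ X)                   # updated (analysis) ensemble
```

If D is replaced by the unperturbed d for every member, the analysed spread systematically underestimates the analysis-error covariance, which is exactly the inconsistency the abstract points out.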
Abstract:
The weak-constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation. The well-known result that for Gaussian error statistics the minimum of the weak-constraint inverse is equal to the maximum-likelihood estimate is rederived. Then several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic model.
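A compact sketch of an ensemble-statistics smoother update of this kind, assuming the whole model trajectory is stacked into one augmented state per member and all observations in the window are assimilated in a single linear analysis; names and shapes are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

def ensemble_smoother(X_traj, H, d, R):
    """X_traj: (T, n, N) ensemble of model trajectories; H: (m, T*n) observation
    operator acting on the stacked trajectory; d: (m,) observations from the
    whole window; R: (m, m) observation-error covariance.
    No adjoint integrations are needed; the posterior spread of the returned
    ensemble provides the error estimates at little extra cost."""
    T, n, N = X_traj.shape
    m = d.size
    Z = X_traj.reshape(T * n, N)                 # stacked space-time state

    D = d[:, None] + rng.multivariate_normal(np.zeros(m), R, size=N).T
    A = Z - Z.mean(axis=1, keepdims=True)        # trajectory anomalies
    S = H @ A                                    # observed anomalies
    C = S @ S.T / (N - 1) + R                    # innovation covariance
    Za = Z + A @ (S.T @ np.linalg.solve(C, D - H @ Z)) / (N - 1)
    return Za.reshape(T, n, N)                   # smoothed trajectory ensemble
```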
Abstract:
The ring-shedding process in the Agulhas Current is studied using the ensemble Kalman filter to assimilate Geosat altimeter data into a two-layer quasigeostrophic ocean model. The properties of the ensemble Kalman filter are further explored with focus on the analysis scheme and the use of gridded data. The Geosat data consist of 10 fields of gridded sea-surface height anomalies, separated 10 days apart, that are added to a climatic mean field. This corresponds to a huge number of data values, and a data reduction scheme must be applied to increase the efficiency of the analysis procedure. Further, it is illustrated how one can resolve the rank problem that occurs when too large a dataset or too small an ensemble is used.
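One common way to handle such a rank problem in ensemble methods (not necessarily the exact procedure used here) is to invert the ensemble-estimated innovation covariance with a truncated SVD, retaining only the leading modes. The sketch below assumes a 99% variance-retention threshold purely for illustration.

```python
import numpy as np

def pseudo_inverse_analysis(C, innovations, trunc=0.99):
    """C: (m, m) innovation covariance estimated from an N-member ensemble
    (rank at most N when m > N); innovations: (m, N) innovation vectors.
    Returns approximately C^+ @ innovations using a truncated SVD."""
    U, s, _ = np.linalg.svd(C)                   # C symmetric: C = U diag(s) U^T
    frac = np.cumsum(s) / np.sum(s)
    k = min(int(np.searchsorted(frac, trunc)) + 1, len(s))  # leading modes kept
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return U @ (s_inv[:, None] * (U.T @ innovations))
```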
Abstract:
Nonlinear data assimilation is high on the agenda in all fields of the geosciences: with ever-increasing model resolution, the inclusion of more physical (and biological) processes, and more complex observation operators, the data-assimilation problem becomes more and more nonlinear. The suitability of particle filters for solving the nonlinear data-assimilation problem in high-dimensional geophysical problems will be discussed. Several existing and new schemes will be presented, and it is shown that at least one of them, the Equivalent-Weights Particle Filter, does indeed beat the curse of dimensionality and provides a way forward for solving the problem of nonlinear data assimilation in high-dimensional systems.
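For context, the sketch below shows a standard importance-weighting (SIR) particle-filter analysis step, the baseline scheme whose weight degeneracy in high dimensions motivates the Equivalent-Weights Particle Filter. It is not the equivalent-weights scheme itself, and the names and the diagonal observation-error assumption are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def sir_particle_filter_step(X, H, d, R_diag):
    """X: (n, N) particles; H: (m, n) observation operator; d: (m,) observations;
    R_diag: (m,) observation-error variances (diagonal R assumed)."""
    n, N = X.shape
    innov = d[:, None] - H @ X                          # (m, N) innovations
    log_w = -0.5 * np.sum(innov**2 / R_diag[:, None], axis=0)
    log_w -= log_w.max()                                # avoid underflow
    w = np.exp(log_w)
    w /= w.sum()                                        # normalised weights

    # Resample: in high dimensions nearly all weight collects on one particle,
    # which is the degeneracy the equivalent-weights scheme is designed to avoid.
    idx = rng.choice(N, size=N, p=w)
    return X[:, idx], w
```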
Abstract:
El Niño events are a prominent feature of climate variability with global climatic impacts. The 1997/98 episode, often referred to as ‘the climate event of the twentieth century’ [1,2], and the 1982/83 extreme El Niño [3] featured a pronounced eastward extension of the west Pacific warm pool and development of atmospheric convection, and hence a huge rainfall increase, in the usually cold and dry equatorial eastern Pacific. Such a massive reorganization of atmospheric convection, which we define as an extreme El Niño, severely disrupted global weather patterns, affecting ecosystems [4,5], agriculture [6], tropical cyclones, drought, bushfires, floods and other extreme weather events worldwide [3,7,8,9]. Potential future changes in such extreme El Niño occurrences could have profound socio-economic consequences. Here we present climate modelling evidence for a doubling in the occurrence of such extreme El Niño events in the future in response to greenhouse warming. We estimate the change by aggregating results from climate models in the Coupled Model Intercomparison Project phases 3 (CMIP3; ref. 10) and 5 (CMIP5; ref. 11) multi-model databases, and a perturbed physics ensemble [12]. The increased frequency arises from a projected surface warming over the eastern equatorial Pacific that occurs faster than in the surrounding ocean waters [13,14], facilitating more occurrences of atmospheric convection in the eastern equatorial region.
Abstract:
In numerical weather prediction, parameterisations are used to simulate missing physics in the model. This missing physics can be due to a lack of scientific understanding or a lack of computing power available to address all the known physical processes. Parameterisations are a large source of uncertainty in a model: the parameter values used in them cannot be measured directly and hence are often not well known, and the parameterisations themselves are also approximations of the processes present in the true atmosphere. Whilst there are many efficient and effective methods for combined state/parameter estimation in data assimilation (DA), such as state augmentation, these are not effective at estimating the structure of parameterisations. A new method of parameterisation estimation is proposed that uses sequential DA methods to estimate errors in the numerical model at each space-time point for each model equation. These errors are then fitted to pre-determined functional forms of missing physics or parameterisations that are based upon prior information. The method is applied to a one-dimensional advection model with additive model error, and it is shown that the method can accurately estimate parameterisations, with consistent error estimates. Furthermore, it is shown how the method depends on the quality of the DA results. The results indicate that this new method is a powerful tool in systematic model improvement.
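A minimal sketch of the fitting stage, under the assumption that the DA has already produced model-error estimates at a set of space-time points and that candidate basis functions for the missing physics have been chosen; all names are illustrative, not the authors' code.

```python
import numpy as np

def fit_parameterisation(model_error, basis):
    """model_error: (K,) DA-derived model-error estimates at K space-time points;
    basis: (K, P) candidate basis functions (e.g. powers or gradients of the
    model state) evaluated at the same points.
    Returns the fitted coefficients and a simple error-covariance estimate."""
    coeffs = np.linalg.lstsq(basis, model_error, rcond=None)[0]

    # Consistent error estimate: residual variance propagated through (B^T B)^-1.
    dof = max(basis.shape[0] - basis.shape[1], 1)
    sigma2 = np.sum((model_error - basis @ coeffs) ** 2) / dof
    cov = sigma2 * np.linalg.inv(basis.T @ basis)
    return coeffs, cov
```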