942 results for eddy covariance
Abstract:
A theory is presented for the adjustment of the Antarctic Circumpolar Current (ACC) and global pycnocline to a sudden and sustained change in wind forcing. The adjustment timescale is controlled by the mesoscale eddy diffusivity across the ACC, the mean width of the ACC, the surface area of the ocean basins to the north, and deep water formation in the North Atlantic. In particular, northern sinking may shorten the timescale and reduce its sensitivity to Southern Ocean eddies, but the relative importance of northern sinking and Southern Ocean eddies cannot be determined precisely, largely because of limitations in the parameterization of northern sinking. Although it is clear that the main processes controlling the adjustment timescale are those which counteract the deepening of the global pycnocline, the theory also suggests that the timescale can be subtly modified by wind forcing over the ACC and by global diapycnal mixing. Results from calculations with a reduced-gravity model compare well with the theory. The multidecadal-to-centennial adjustment timescale implies that long observational time series will be required to detect dynamic change in the ACC due to anthropogenic forcing. The potential role of Southern Ocean mesoscale eddy activity in determining both the equilibrium state of the ACC and the timescale over which it adjusts suggests that the response to anthropogenic forcing may differ between coupled ocean-atmosphere climate models that parameterize mesoscale eddies and those that resolve them.
Conditioning of incremental variational data assimilation, with application to the Met Office system
Abstract:
Implementations of incremental variational data assimilation require the iterative minimization of a series of linear least-squares cost functions. The accuracy and speed with which these linear minimization problems can be solved is determined by the condition number of the Hessian of the problem. In this study, we examine how different components of the assimilation system influence this condition number. Theoretical bounds on the condition number for a single parameter system are presented and used to predict how the condition number is affected by the observation distribution and accuracy and by the specified lengthscales in the background error covariance matrix. The theoretical results are verified in the Met Office variational data assimilation system, using both pseudo-observations and real data.
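The dependence described above can be illustrated with a minimal numpy sketch (a toy setup, not the Met Office system; the grid size, correlation model, and observation placement are all assumptions): a 1-D periodic grid, an exponential background error correlation with a tunable lengthscale, two direct observations, and the Hessian S = B^-1 + H^T R^-1 H of the linearised cost function. Longer background error lengthscales (and, similarly, more accurate observations) worsen the conditioning.

```python
import numpy as np

def hessian_condition(n=50, lengthscale=2.0, obs_idx=(10, 30), obs_var=0.1):
    """Condition number of the incremental-VAR Hessian S = B^-1 + H^T R^-1 H."""
    x = np.arange(n)
    # shortest distance between grid points on a periodic 1-D domain
    sep = np.abs(x[:, None] - x[None, :])
    d = np.minimum(sep, n - sep)
    B = np.exp(-d / lengthscale)                 # exponential correlation model
    H = np.zeros((len(obs_idx), n))
    H[np.arange(len(obs_idx)), list(obs_idx)] = 1.0   # direct point observations
    R_inv = np.eye(len(obs_idx)) / obs_var
    S = np.linalg.inv(B) + H.T @ R_inv @ H       # Hessian of the linearised cost
    return np.linalg.cond(S)

# Conditioning worsens as the background error lengthscale grows:
worse_with_long_scales = (hessian_condition(lengthscale=3.0)
                          > hessian_condition(lengthscale=1.0))
```

The same toy can be used to probe the other factors the abstract mentions, e.g. shrinking `obs_var` or clustering `obs_idx`.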
Abstract:
Modelling spatial covariance is an essential part of all geostatistical methods. Traditionally, parametric semivariogram models are fit from available data. More recently, it has been suggested to use nonparametric correlograms obtained from spatially complete data fields. Here, both estimation techniques are compared. Nonparametric correlograms are shown to have a substantial negative bias. Nonetheless, when combined with the sample variance of the spatial field under consideration, they yield an estimate of the semivariogram that is unbiased for small lag distances. This justifies the use of this estimation technique in geostatistical applications. Various formulations of geostatistical combination (Kriging) methods are used here for the construction of hourly precipitation grids for Switzerland based on data from a sparse realtime network of raingauges and from a spatially complete radar composite. Two variants of Ordinary Kriging (OK) are used to interpolate the sparse gauge observations. In both OK variants, the radar data are only used to determine the semivariogram model. One variant relies on a traditional parametric semivariogram estimate, whereas the other variant uses the nonparametric correlogram. The variants are tested for three cases and the impact of the semivariogram model on the Kriging prediction is illustrated. For the three test cases, the method using nonparametric correlograms performs equally well or better than the traditional method, and at the same time offers great practical advantages. Furthermore, two variants of Kriging with external drift (KED) are tested, both of which use the radar data to estimate nonparametric correlograms, and as the external drift variable. The first KED variant has been used previously for geostatistical radar-raingauge merging in Catalonia (Spain). The second variant is newly proposed here and is an extension of the first. 
Both variants are evaluated for the three test cases as well as an extended evaluation period. It is found that both methods yield merged fields of better quality than the original radar field or fields obtained by OK of gauge data. The newly suggested KED formulation is shown to be beneficial, in particular in mountainous regions where the quality of the Swiss radar composite is comparatively low. An analysis of the Kriging variances shows that none of the methods tested here provides a satisfactory uncertainty estimate. A suitable variable transformation is expected to improve this.
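The central estimation idea, combining a nonparametric correlogram from a spatially complete field with the sample variance to obtain a semivariogram, gamma(h) ≈ s^2 (1 - rho(h)), can be sketched on synthetic data. A minimal 1-D sketch (an AR(1) surrogate standing in for the radar composite; all parameters are hypothetical), compared against the direct method-of-moments semivariogram:

```python
import numpy as np

rng = np.random.default_rng(0)

# Spatially complete 1-D surrogate field (AR(1) noise standing in for radar).
n, phi = 2000, 0.9
field = np.empty(n)
field[0] = rng.normal()
for i in range(1, n):
    field[i] = phi * field[i - 1] + rng.normal()

def correlogram(z, max_lag):
    """Nonparametric correlogram: lag correlations of a complete field."""
    z = z - z.mean()
    c0 = np.mean(z * z)
    return np.array([np.mean(z[: len(z) - h] * z[h:]) / c0
                     for h in range(max_lag + 1)])

# Semivariogram implied by the correlogram and the sample variance:
#   gamma(h) ≈ s^2 (1 - rho(h))
s2 = field.var()
rho = correlogram(field, 20)
gamma = s2 * (1.0 - rho)

# Direct method-of-moments semivariogram for comparison.
gamma_mom = np.array([0.5 * np.mean((field[: n - h] - field[h:]) ** 2)
                      for h in range(21)])
```

As the abstract notes, the two estimates agree closely at small lags, which is what justifies using the correlogram-based form in Kriging.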
Abstract:
The coarse spacing of automatic rain gauges complicates near-real-time spatial analyses of precipitation. We test the possibility of improving such analyses by considering, in addition to the in situ measurements, the spatial covariance structure inferred from past observations with a denser network. To this end, a statistical reconstruction technique, reduced-space optimal interpolation (RSOI), is applied over Switzerland, a region of complex topography. RSOI consists of two main parts. First, principal component analysis (PCA) is applied to obtain a reduced-space representation of gridded high-resolution precipitation fields available for a multiyear calibration period in the past. Second, sparse real-time rain gauge observations are used to estimate the principal component scores and to reconstruct the precipitation field. In this way, climatological information at higher resolution than the near-real-time measurements is incorporated into the spatial analysis. PCA is found to efficiently reduce the dimensionality of the calibration fields, and RSOI is successful despite the difficulties associated with the statistical distribution of daily precipitation (skewness, dry days). Examples and a systematic evaluation show substantial added value over a simple interpolation technique that uses near-real-time observations only. The benefit is particularly strong for larger-scale precipitation and prominent topographic effects. Small-scale precipitation features are reconstructed at a skill comparable to that of the simple technique. Stratifying the reconstruction by weather type yields little added skill. Apart from application in near real time, RSOI may also be valuable for enhancing instrumental precipitation analyses for the historic past, when direct observations were sparse.
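The two RSOI steps can be sketched on synthetic data (smooth sine patterns stand in for the calibration precipitation fields; every name and parameter here is illustrative, and real precipitation would need the skewness/dry-day handling the abstract mentions): PCA of a calibration archive, then least-squares estimation of the principal-component scores from a handful of "gauge" points.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration archive: 300 fields on a 100-point grid, built
# from four smooth patterns (stand-ins for real precipitation structures).
grid = np.linspace(0.0, 1.0, 100)
patterns = np.stack([np.sin((k + 1) * np.pi * grid) for k in range(4)])
amps = np.array([3.0, 2.0, 1.0, 0.5])
calib = (rng.normal(size=(300, 4)) * amps) @ patterns        # (300, 100)

# Step 1: PCA of the calibration fields -> reduced-space basis (leading EOFs).
mean = calib.mean(axis=0)
_, _, Vt = np.linalg.svd(calib - mean, full_matrices=False)
eofs = Vt[:4]                                                # (4, 100)

# Step 2 (near real time): a sparse set of "gauges" samples one true field.
truth = 1.5 * patterns[0] - 0.8 * patterns[1] + mean
gauge_idx = np.array([5, 17, 29, 41, 53, 65, 77, 93])
obs = truth[gauge_idx]

# Estimate the principal-component scores from the sparse observations by
# least squares, then reconstruct the full field in the reduced space.
A = eofs[:, gauge_idx].T                                     # (8, 4)
scores, *_ = np.linalg.lstsq(A, obs - mean[gauge_idx], rcond=None)
recon = mean + scores @ eofs
```

Because the toy truth lies exactly in the span of the calibration patterns and the observations are noise-free, the reconstruction here is essentially exact; real fields only project partially onto the leading EOFs.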
Abstract:
The need for consistent assimilation of satellite measurements for numerical weather prediction led operational meteorological centers to assimilate satellite radiances directly using variational data assimilation systems. More recently there has been a renewed interest in assimilating satellite retrievals (e.g., to avoid the use of relatively complicated radiative transfer models as observation operators for data assimilation). The aim of this paper is to provide a rigorous and comprehensive discussion of the conditions for the equivalence between radiance and retrieval assimilation. It is shown that two requirements need to be satisfied for the equivalence: (i) the radiance observation operator needs to be approximately linear in a region of the state space centered at the retrieval and with a radius of the order of the retrieval error; and (ii) any prior information used to constrain the retrieval should not underrepresent the variability of the state, so as to retain the information content of the measurements. Both these requirements can be tested in practice. When these requirements are met, retrievals can be transformed so as to represent only the portion of the state that is well constrained by the original radiance measurements and can be assimilated in a consistent and optimal way, by means of an appropriate observation operator and a unit matrix as error covariance. Finally, specific cases when retrieval assimilation can be more advantageous (e.g., when the estimate sought by the operational assimilation system depends on the first guess) are discussed.
Abstract:
Building on studies by Brayshaw et al. (2009, 2011) of the basic ingredients of the North Atlantic storm track (land-sea contrast, orography and SST), this article investigates the impact of Eurasian topography and Pacific SST anomalies on the North Pacific and Atlantic storm tracks through a hierarchy of atmospheric GCM simulations using idealised boundary conditions in the HadGAM1 model. The Himalaya-Tibet mountain complex is found to play a crucial role in shaping the North Pacific storm track. The northward deflection of the westerly flow around northern Tibet generates an extensive pool of very cold air in the north-eastern tip of the Asian continent, which strengthens the meridional temperature gradient and favours baroclinic growth in the western Pacific. The Kuroshio SST front is also instrumental in strengthening the Pacific storm track through its impact on near-surface baroclinicity, while the warm waters around Indonesia tend to weaken it through the impact on baroclinicity of stationary Rossby waves propagating poleward from the convective heating regions. Three mechanisms by which the Atlantic storm track may be affected by changes in the boundary conditions upstream of the Rockies are discussed. In the model configuration used here, stationary Rossby waves emanating from Tibet appear to weaken the North Atlantic storm track substantially, whereas those generated over the cold waters off Peru appear to strengthen it. Changes in eddy-driven surface winds over the Pacific generally appear to modify the flow over the Rocky Mountains, leading to consistent modifications in the Atlantic storm track. The evidence for each of these mechanisms is, however, ultimately equivocal in these simulations.
Abstract:
Tropical cyclones (TCs) are normally not studied at the individual level with Global Climate Models (GCMs), because the coarse grid spacing is often deemed insufficient for a realistic representation of the basic underlying processes. GCMs are indeed routinely deployed at low resolution, in order to enable sufficiently long integrations, which means that only large-scale TC proxies are diagnosed. A new class of GCMs is emerging, however, which is capable of simulating TC-type vortices by retaining a horizontal resolution similar to that of operational NWP GCMs; their integration on the latest supercomputers enables the completion of long-term integrations. The UK-Japan Climate Collaboration (UJCC) and UK-HiGEM projects have developed climate GCMs that can be run routinely for decades (with a grid spacing of 60 km) or centuries (with a grid spacing of 90 km); when coupled to the ocean GCM, a mesh of 1/3 degree provides eddy-permitting resolution. The 90 km resolution model has been developed entirely by the UK-HiGEM consortium (together with its 1/3 degree ocean component); the 60 km atmospheric GCM has been developed by UJCC, in collaboration with the Met Office Hadley Centre.
Abstract:
Practically all extant work on flows over obstacle arrays, whether laboratory experiments or numerical modelling, is for cases where the oncoming wind is normal to salient faces of the obstacles. In the field, however, this is rarely the case. Here, simulations of flows at various directions over arrays of cubes representing typical urban canopy regions are presented and discussed. The computations are of both direct numerical simulation and large-eddy simulation type. Attention is concentrated on the differences in the mean flow within the canopy region arising from the different wind directions and the consequent effects on global properties such as the total surface drag, which can change very significantly—by up to a factor of three in some circumstances. It is shown that for a given Reynolds number the typical viscous forces are generally a rather larger fraction of the pressure forces (principally the drag) for non-normal than for normal wind directions and that, dependent on the surface morphology, the average flow direction deep within the canopy can be largely independent of the oncoming wind direction. Even for regular arrays of regular obstacles, a wind direction not normal to the obstacle faces can in general generate a lateral lift force (in the direction normal to the oncoming flow). The results demonstrate this and it is shown how computations in a finite domain with the oncoming flow generated by an appropriate forcing term (e.g. a pressure gradient) then lead inevitably to an oncoming wind direction aloft that is not aligned with the forcing term vector.
Abstract:
Forest canopies are important components of the terrestrial carbon budget, which has motivated a worldwide effort, FLUXNET, to measure CO2 exchange between forests and the atmosphere. These measurements are difficult to interpret and to scale up to estimate exchange across a landscape. Here we review the effects of complex terrain on the mean flow, turbulence, and scalar exchange in canopy flows, as exemplified by adjustment to forest edges and hills, including the effects of stable stratification. We focus on the fundamental fluid mechanics, in which developments in theory, measurements, and modeling, particularly through large-eddy simulation, are identifying important processes and providing scaling arguments. These developments set the stage for the development of predictive models that can be used in combination with measurements to estimate exchange at the landscape scale.
Abstract:
This study describes the turbulent processes in the upper ocean boundary layer forced by a constant surface stress in the absence of the Coriolis force using large-eddy simulation. The boundary layer that develops has a two-layer structure, a well-mixed layer above a stratified shear layer. The depth of the mixed layer is approximately constant, whereas the depth of the shear layer increases with time. The turbulent momentum flux varies approximately linearly from the surface to the base of the shear layer. There is a maximum in the production of turbulence through shear at the base of the mixed layer. The magnitude of the shear production increases with time. The increase is mainly a result of the increase in the turbulent momentum flux at the base of the mixed layer due to the increase in the depth of the boundary layer. The length scale for the shear turbulence is the boundary layer depth. A simple scaling is proposed for the magnitude of the shear production that depends on the surface forcing and the average mixed layer current. The scaling can be interpreted in terms of the divergence of a mean kinetic energy flux. A simple bulk model of the boundary layer is developed to obtain equations describing the variation of the mixed layer and boundary layer depths with time. The model shows that the rate at which the boundary layer deepens does not depend on the stratification of the thermocline. The bulk model shows that the variation in the mixed layer depth is small as long as the surface buoyancy flux is small.
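The proposed scaling can be illustrated schematically. The time-stepping below is an assumption made purely for illustration (a slab momentum budget with a constant mixed-layer depth and a constant entrainment rate), not the paper's bulk model; it shows why shear production, read as the divergence of a mean kinetic-energy flux ~ u*^2 U / h, grows with time as the mixed-layer current U accelerates.

```python
import numpy as np

# Schematic only: assumed slab momentum budget and entrainment law,
# not the paper's bulk model. All parameter values are hypothetical.
u_star = 0.01          # surface friction velocity (m s^-1)
h_m = 30.0             # mixed layer depth (m), roughly constant as in the study
A_e = 0.2              # assumed entrainment coefficient for BL deepening
dt, nsteps = 60.0, 1440                    # one day in one-minute steps

U, h_b = 0.0, h_m                          # mixed-layer current, BL depth
P_hist = []
for _ in range(nsteps):
    U += u_star**2 / h_m * dt              # surface stress accelerates the slab
    h_b += A_e * u_star * dt               # boundary layer deepens with time
    # shear production scaling: divergence of the mean KE flux ~ u*^2 U / h
    P_hist.append(u_star**2 * U / h_b)
```

The growth of `P_hist` with time mirrors the LES result that shear production at the mixed-layer base increases as the boundary layer deepens.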
Abstract:
The interactions between shear-free turbulence in two regions (denoted as + and − on either side of a nearly flat horizontal interface) are shown here to be controlled by several mechanisms, which depend on the magnitudes of the ratios of the densities, ρ+/ρ−, the kinematic viscosities of the fluids, ν+/ν−, and the root mean square (r.m.s.) velocities of the turbulence, u0+/u0−, above and below the interface. This study focuses on gas–liquid interfaces, so that ρ+/ρ− ≪ 1, and on cases where turbulence is generated either above or below the interface, so that u0+/u0− is either very large or very small. It is assumed that vertical buoyancy forces across the interface are much larger than inertial forces, so that the interface is nearly flat, and coupling between turbulence on either side of the interface is determined by viscous stresses. A formal linearized rapid-distortion analysis with viscous effects is developed by extending the previous study by Hunt & Graham (J. Fluid Mech., vol. 84, 1978, pp. 209–235) of shear-free turbulence near rigid plane boundaries. The physical processes accounted for in the model include both the blocking effect of the interface on normal components of the turbulence and the viscous coupling of the horizontal field across thin interfacial viscous boundary layers. The horizontal divergence in the perturbation velocity field in the viscous layer drives weak inviscid irrotational velocity fluctuations outside the viscous boundary layers, in a mechanism analogous to Ekman pumping. The analysis shows the following. (i) The blocking effects are similar to those near rigid boundaries on each side of the interface, but, through the action of the thin viscous layers above and below the interface, the horizontal and vertical velocity components differ from those near a rigid surface and are correlated or anti-correlated respectively. (ii) Because of the growth of the viscous layers on either side of the interface, the ratio uI/u0, where uI is the r.m.s. of the interfacial velocity fluctuations and u0 the r.m.s. of the homogeneous turbulence far from the interface, does not vary with time. If the turbulence is driven in the lower layer, with ρ+/ρ− ≪ 1 and u0+/u0− ≪ 1, then uI/u0− ~ 1 when Re = u0−L−/ν− ≫ 1 and R = (ρ−/ρ+)(ν−/ν+)^(1/2) ≫ 1. If the turbulence is driven in the upper layer, with ρ+/ρ− ≪ 1 and u0+/u0− ≫ 1, then uI/u0+ ~ 1/(1 + R). (iii) Nonlinear effects become significant over periods greater than Lagrangian time scales. When turbulence is generated in the lower layer, and the Reynolds number is high enough, motions in the upper viscous layer are turbulent. The horizontal vorticity tends to decrease, and the vertical vorticity of the eddies dominates their asymptotic structure. When turbulence is generated in the upper layer, and the Reynolds number is less than about 10^6–10^7, the fluctuations in the viscous layer do not become turbulent. Nonlinear processes at the interface increase the ratio uI/u0+ for sheared or shear-free turbulence in the gas above its linear value of uI/u0+ ~ 1/(1 + R) to (ρ+/ρ−)^(1/2) ~ 1/30 for air–water interfaces. This estimate agrees with the direct numerical simulation results of Lombardi, De Angelis & Banerjee (Phys. Fluids, vol. 8, no. 6, 1996, pp. 1643–1665). Because the linear viscous–inertial coupling mechanism is still significant, the eddy motions on either side of the interface have a similar horizontal structure, although their vertical structure differs.
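The quoted air-water numbers are straightforward to check. The snippet below evaluates the coupling parameter R and the linear and nonlinear interfacial velocity ratios using approximate standard property values for air and water:

```python
import numpy as np

# Approximate standard property values for air (+, upper) and water (-, lower).
rho_air, rho_water = 1.2, 1000.0        # densities (kg m^-3)
nu_air, nu_water = 1.5e-5, 1.0e-6       # kinematic viscosities (m^2 s^-1)

# Viscous coupling parameter R = (rho-/rho+) (nu-/nu+)^(1/2).
R = (rho_water / rho_air) * np.sqrt(nu_water / nu_air)

linear_ratio = 1.0 / (1.0 + R)                   # linear theory: uI/u0+ ~ 1/(1+R)
nonlinear_ratio = np.sqrt(rho_air / rho_water)   # nonlinear estimate ~ 1/30
```

With these values R is of order 200, so the nonlinear estimate (ρ+/ρ−)^(1/2) ≈ 1/30 is roughly an order of magnitude larger than the linear value 1/(1+R), consistent with the abstract.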
Abstract:
This paper describes recent variations of the North Atlantic eddy-driven jet stream and analyzes the mean response of the jet to anthropogenic forcing in climate models. Jet stream changes are analyzed both using a direct measure of the near-surface westerly wind maximum and using an EOF-based approach. This allows jet stream changes to be related to the widely used leading patterns of variability: the North Atlantic Oscillation (NAO) and East Atlantic (EA) pattern. Viewed in NAO–EA state space, isolines of jet latitude and speed resemble a distorted polar coordinate system, highlighting the dependence of the jet stream quantities on both spatial patterns. Some differences in the results of the two methods are discussed, but both approaches agree on the general characteristics of the climate models. While there is some agreement between models on a poleward shift of the jet stream in response to anthropogenic forcing, there is still considerable spread between different model projections, especially in winter. Furthermore, the model responses to forcing are often weaker than their biases when compared to a reanalysis. Diagnoses of jet stream changes can be sensitive to the methodologies used, and several aspects of this are also discussed.
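In its simplest form, the direct jet diagnostic described above reduces to locating the near-surface westerly wind maximum. A toy sketch on an idealised wind profile (the Gaussian jet and all numbers are invented for illustration; operational diagnostics typically low-pass filter the winds first and may interpolate around the maximum):

```python
import numpy as np

# Idealised low-level zonal-mean zonal wind; Gaussian jet centred at 47N.
lats = np.linspace(20.0, 70.0, 101)                       # 0.5-degree grid
u850 = 8.0 * np.exp(-0.5 * ((lats - 47.0) / 8.0) ** 2)

def jet_diagnostics(lat, u):
    """Jet latitude and speed as the near-surface westerly wind maximum."""
    i = int(np.argmax(u))
    return lat[i], u[i]

jet_lat, jet_speed = jet_diagnostics(lats, u850)
```

Applied to model and reanalysis winds, time series of these two scalars are what get projected onto the NAO-EA state space discussed above.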
Abstract:
Recent research has shown that Lighthill–Ford spontaneous gravity wave generation theory, when applied to numerical model data, can help predict areas of clear-air turbulence. It is hypothesized that this is the case because spontaneously generated atmospheric gravity waves may initiate turbulence by locally modifying the stability and wind shear. As an improvement on the original research, this paper describes the creation of an 'operational' algorithm (ULTURB) with three modifications to the original method: (1) extending the altitude range for which the method is effective downward to the top of the boundary layer, (2) adding turbulent kinetic energy production from the environment to the locally produced turbulent kinetic energy production, and (3) transforming the turbulent kinetic energy dissipation rate to eddy dissipation rate, the turbulence metric that is becoming the worldwide standard. In a comparison with the original method and with the Graphical Turbulence Guidance second version (GTG2) automated procedure for forecasting mid- and upper-level aircraft turbulence, ULTURB performed better for all turbulence intensities. Since ULTURB, unlike GTG2, is founded on a self-consistent dynamical theory, it may offer forecasters better insight into the causes of clear-air turbulence and may ultimately enhance its predictability.
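Modification (3) amounts to a change of units: the eddy dissipation rate (EDR) used as the standard aviation turbulence metric is the cube root of the turbulent kinetic energy dissipation rate. A one-line sketch (the threshold values in the comment are indicative only and vary with aircraft type):

```python
import numpy as np

def tke_dissipation_to_edr(epsilon):
    """Eddy dissipation rate from the TKE dissipation rate epsilon.

    EDR = epsilon**(1/3), with epsilon in m^2 s^-3 and EDR in m^(2/3) s^-1.
    """
    return np.cbrt(epsilon)

# Indicative aviation thresholds (they vary with aircraft type): EDR near
# 0.1 light, near 0.2 moderate, above roughly 0.45 severe turbulence.
edr = tke_dissipation_to_edr(np.array([1e-3, 8e-3, 0.09]))
```

Because the cube root compresses the large dynamic range of dissipation rates, EDR values map conveniently onto a small set of intensity categories.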
Abstract:
In numerical weather prediction (NWP), data assimilation (DA) methods are used to combine available observations with numerical model estimates. This is done by minimising measures of error in both the observations and the model estimates, with more weight given to data that can be more trusted. Any DA method requires an estimate of the initial forecast error covariance matrix. For convective-scale data assimilation, however, the properties of the error covariances are not well understood. An effective way to investigate covariance properties in the presence of convection is to use an ensemble-based method, for which an estimate of the error covariance is readily available at each time step. In this work, we investigate the performance of the ensemble square root filter (EnSRF) in the presence of cloud growth, applied to an idealised 1D convective column model of the atmosphere. We show that the EnSRF performs well in capturing cloud growth, but the ensemble does not cope well with discontinuities introduced into the system by parameterised rain. The state estimates lose accuracy and, more importantly, the ensemble is unable to capture the spread (variance) of the estimates correctly. We also find, counter-intuitively, that by reducing the spatial frequency and/or the accuracy of the observations, the ensemble is able to capture the states and their variability successfully across all regimes.
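The EnSRF update used in experiments of this kind can be sketched for a single scalar observation. This is a generic Whitaker-Hamill-style deterministic square-root update on a toy three-variable state, not the paper's convective column model; the ensemble, observation operator, and numbers are all illustrative.

```python
import numpy as np

def ensrf_update(ens, H, y, r):
    """Deterministic EnSRF update for one scalar observation.

    ens: (n_state, n_members) forecast ensemble, H: (n_state,) linear obs
    operator, y: observed value, r: observation error variance.
    """
    m = ens.shape[1]
    xb = ens.mean(axis=1)
    Xb = ens - xb[:, None]                     # forecast perturbations
    hx = H @ ens
    e = hx - hx.mean()
    s2 = e @ e / (m - 1)                       # HPH^T (obs-space variance)
    K = (Xb @ e) / (m - 1) / (s2 + r)          # Kalman gain PH^T / (HPH^T + R)
    alpha = 1.0 / (1.0 + np.sqrt(r / (s2 + r)))
    xa = xb + K * (y - hx.mean())              # mean update
    Xa = Xb - alpha * np.outer(K, e)           # reduced-gain perturbation update
    return xa[:, None] + Xa

rng = np.random.default_rng(2)
ens = rng.normal(size=(3, 40)) + np.array([[1.0], [2.0], [3.0]])
H = np.array([1.0, 0.0, 0.0])
updated = ensrf_update(ens, H, y=1.5, r=0.25)
```

The reduced-gain factor alpha ensures the analysis ensemble variance matches the Kalman filter value exactly, without perturbing the observations.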
Abstract:
A direct method is presented for determining the uncertainty in reservoir pressure, flow, and net present value (NPV) using the time-dependent, one-phase, two- or three-dimensional equations of flow through a porous medium. The uncertainty in the solution is modelled as a probability distribution function and is computed from given statistical data for input parameters such as permeability. The method generates an expansion for the mean of the pressure about a deterministic solution to the system equations, using a perturbation to the mean of the input parameters. Hierarchical equations that define approximations to the mean solution at each point and to the field covariance of the pressure are developed and solved numerically. The procedure is then used to find the statistics of the flow and the risked value of the field, defined by the NPV, for a given development scenario. The method involves only one (albeit complicated) solution of the equations, in contrast with the more usual Monte Carlo approach, where many such solutions are required. The procedure is applied easily to other physical systems modelled by linear or nonlinear partial differential equations with uncertain data.
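The contrast between the direct (perturbation) approach and Monte Carlo can be seen on a toy analogue: a single uncertain parameter entering a nonlinear response, here a Darcy-style pressure drop dp = qL/k with uncertain permeability k (the model and all values are hypothetical stand-ins for the reservoir equations). The perturbation estimate of the mean needs one deterministic evaluation plus derivative information, while Monte Carlo needs many evaluations.

```python
import numpy as np

# Toy analogue: steady Darcy-style pressure drop dp = q*L/k with an
# uncertain permeability k. All names and values are hypothetical.
q, L = 1.0, 1.0
mu_k, sigma_k = 2.0, 0.2

def pressure_drop(k):
    return q * L / k

# Direct (perturbation) estimate of the mean response, one model evaluation
# plus curvature information:  E[dp] ≈ dp(mu) + 0.5 dp''(mu) sigma^2,
# with dp''(k) = 2 q L / k**3.
mean_pert = pressure_drop(mu_k) + 0.5 * (2.0 * q * L / mu_k**3) * sigma_k**2

# Monte Carlo reference: many model evaluations.
rng = np.random.default_rng(3)
mean_mc = pressure_drop(rng.normal(mu_k, sigma_k, 200_000)).mean()
```

The second-order correction captures the upward shift of the mean caused by the convexity of 1/k, which the Monte Carlo estimate confirms at far greater cost.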