46 results for Hermite interpolation
Abstract:
Surface-based GPS measurements of zenith path delay (ZPD) can be used to derive the vertically integrated water vapor (IWV) of the atmosphere. ZPD data are collected in a global network presently consisting of 160 stations as part of the International GPS Service. In the present study, ZPD data from this network are converted into IWV using observed surface pressure and the mean atmospheric water vapor column temperature obtained from the European Centre for Medium-Range Weather Forecasts' (ECMWF) operational analyses (OA). For the four months of January and July 2000 and 2001, the GPS-derived IWV values are compared to the IWV from the ECMWF OA, with a special focus on the monthly averaged difference (bias) and the standard deviation of daily differences. This comparison shows that the GPS-derived IWV values are well suited for the validation of OA of IWV. For most GPS stations, the IWV data agree quite well with the analyzed data, indicating that both are correct at these locations. Larger differences for individual days are interpreted as errors in the analyses. A dry bias in the winter is found over the central United States, Canada, and central Siberia, suggesting a systematic analysis error. Larger differences were mainly found in mountain areas; these were related to representation problems and interpolation difficulties between model height and station height. In addition, the IWV comparison can be used to identify errors or problems in the observations of ZPD. This includes errors in the data themselves, e.g., erroneous outliers in the measured time series, as well as systematic errors that affect all IWV values at a specific station. Such stations were excluded from the intercomparison. Finally, long-term requirements for a GPS-based water vapor monitoring system are discussed.
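The conversion itself is not spelled out in the abstract, but the standard ZPD-to-IWV chain (a Saastamoinen-type hydrostatic delay from surface pressure, followed by a Bevis-type conversion factor from the mean vapor column temperature) can be sketched as below. This is a generic illustration under assumed textbook constants, not the paper's implementation; the function name zpd_to_iwv is hypothetical.

```python
import numpy as np

def zpd_to_iwv(zpd_m, pressure_hpa, tm_kelvin, lat_deg, height_m):
    """Convert a GPS zenith path delay (m) to IWV (kg m-2).

    Sketch of the standard two-step conversion: subtract the hydrostatic
    delay modelled from surface pressure, then scale the wet residual by
    a factor depending on the mean vapor column temperature Tm.
    """
    # Zenith hydrostatic delay, Saastamoinen model (metres)
    phi = np.deg2rad(lat_deg)
    zhd = 0.0022768 * pressure_hpa / (
        1.0 - 0.00266 * np.cos(2.0 * phi) - 0.28e-6 * height_m)
    zwd = zpd_m - zhd                      # zenith wet delay
    # Bevis et al. (1992) conversion factor (assumed constants)
    Rv = 461.5                             # J kg-1 K-1
    k2_prime = 0.221                       # K Pa-1
    k3 = 3.739e3                           # K^2 Pa-1
    kappa = 1.0 / (1e-6 * Rv * (k2_prime + k3 / tm_kelvin))
    return kappa * zwd

# Example: ZPD of 2.40 m at a mid-latitude station near sea level
print(zpd_to_iwv(2.40, 1013.0, 270.0, 50.0, 100.0))  # ~15 kg m-2
```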
Abstract:
The possibility of using a time sequence of surface pressure observations in four-dimensional data assimilation is investigated. It is shown that a linear multilevel quasi-geostrophic model can be updated successfully with surface data alone, provided the number of time levels is at least as large as the number of vertical levels. It is further demonstrated that current statistical analysis procedures are very inefficient at assimilating surface observations, and it is shown by numerical experiments that the vertical interpolation must be carried out using the structure of the most dominant baroclinic mode in order to obtain a satisfactory updating. Different possible ways towards finding a practical solution are discussed.
Abstract:
A system for continuous data assimilation is presented and discussed. To simulate the dynamical development, a channel version of a balanced barotropic model is used, and geopotential (height) data are assimilated into the model's computations as they become available. In the first experiment the updating is performed every 24, 12 and 6 hours with a given network. The stations are distributed at random in 4 groups in order to simulate 4 areas with different densities of stations. Optimum interpolation is performed on the difference between the forecast and the valid observations. The RMS error of the analyses is reduced in time, the error being smaller the more frequently the updating is performed. Updating every 6 hours yields an error in the analysis smaller than the RMS error of the observations. In a second experiment the updating is performed with data from a moving satellite with a side-scan capability of about 15°. If the satellite data are analysed at every time step before they are introduced into the system, the error of the analysis is reduced to a value below the RMS error of the observations already after 24 hours, and this yields as a whole a better result than updating from a fixed network. If the satellite data are introduced without any modification, the error of the analysis is reduced much more slowly, and it takes about 4 days to reach a result comparable to the one where the data have been analysed.
Abstract:
With the introduction of new observing systems based on asynoptic observations, the analysis problem has changed in character. In the near future we may expect that a considerable part of meteorological observations will be unevenly distributed in four dimensions, i.e. three dimensions in space and one in time. The term analysis, or objective analysis in meteorology, means the process of interpolating meteorological observations from unevenly distributed locations to a network of regularly spaced grid points. Because numerical weather prediction models require the governing finite-difference equations to be solved on such a grid lattice, objective analysis is a three-dimensional (or mostly two-dimensional) interpolation technique. As a consequence of the structure of the conventional synoptic network, with separated data-sparse and data-dense areas, four-dimensional analysis has in fact been used intensively for many years. Weather services have thus based their analyses not only on synoptic data at the time of the analysis and on climatology, but also on the fields predicted from the previous observation hour and valid at the time of the analysis. The inclusion of the time dimension in objective analysis will be called four-dimensional data assimilation. From one point of view it seems possible to apply the conventional technique to the new data sources by simply reducing the time interval in the analysis-forecasting cycle. This could in fact also be justified for the conventional observations. We have fairly good coverage of surface observations 8 times a day, and several upper-air stations make radiosonde and radiowind observations 4 times a day. If we have a 3-hour step in the analysis-forecasting cycle instead of the more usual 12 hours, we may without any difficulty treat all observations as synoptic. No observation would then be more than 90 minutes off time, and even during strongly transient motion the observations would fall within a horizontal mesh of 500 km × 500 km.
Abstract:
The behavior of the ensemble Kalman filter (EnKF) is examined in the context of a model that exhibits a nonlinear chaotic (slow) vortical mode coupled to a linear (fast) gravity wave of a given amplitude and frequency. It is shown that accurate recovery of both modes is enhanced when covariances between fast and slow normal-mode variables (which reflect the slaving relations inherent in balanced dynamics) are modeled correctly. More ensemble members are needed to recover the fast, linear gravity wave than the slow, vortical motion. Although the EnKF tends to diverge in the analysis of the gravity wave, the filter divergence is stable and does not lead to a great loss of accuracy. Consequently, provided the ensemble is large enough and observations are made that reflect both time scales, the EnKF is able to recover both time scales more accurately than optimal interpolation (OI), which uses a static error covariance matrix. For OI it is also found to be problematic to observe the state at a frequency that is a subharmonic of the gravity wave frequency, a problem that is in part overcome by the EnKF. However, error in the modeled gravity wave parameters can be detrimental to the performance of the EnKF and remove its implied advantages, suggesting that a modified algorithm or a method for accounting for model error is needed.
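For concreteness, a minimal perturbed-observation EnKF analysis step of the kind examined here can be written in a few lines; this is a generic sketch, not the authors' code, and the fast-slow cross-covariances the paper emphasizes enter through the sample covariance term Pxy.

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, y_obs, H, obs_var):
    """One perturbed-observation EnKF analysis step.

    ensemble: (n_members, n_state) forecast ensemble
    y_obs:    (n_obs,) observations;  H: (n_obs, n_state) linear operator
    """
    m = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)          # state anomalies
    HX = ensemble @ H.T
    HXa = HX - HX.mean(axis=0)                    # observed-space anomalies
    # Sample covariances; cross terms between slow and fast variables
    # carry the balance ("slaving") information
    Pxy = X.T @ HXa / (m - 1)
    Pyy = HXa.T @ HXa / (m - 1) + obs_var * np.eye(len(y_obs))
    K = Pxy @ np.linalg.inv(Pyy)                  # Kalman gain
    # Perturbing the observations keeps the analysis spread consistent
    y_pert = y_obs + rng.normal(0.0, np.sqrt(obs_var), (m, len(y_obs)))
    return ensemble + (y_pert - HX) @ K.T

# Tiny usage: 20 members, a (slow, fast) 2-variable state, observing
# only the fast variable
ens = rng.normal(size=(20, 2))
H = np.array([[0.0, 1.0]])
ens_a = enkf_update(ens, np.array([0.5]), H, obs_var=0.1)
```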
Conditioning model output statistics of regional climate model precipitation on circulation patterns
Abstract:
Dynamical downscaling of Global Climate Models (GCMs) through regional climate models (RCMs) potentially improves the usability of the output for hydrological impact studies. However, a further downscaling or interpolation of precipitation from RCMs is often needed to match the precipitation characteristics at the local scale. This study analysed three Model Output Statistics (MOS) techniques to adjust RCM precipitation: (1) a simple direct method (DM), (2) quantile-quantile mapping (QM) and (3) a distribution-based scaling (DBS) approach. The modelled precipitation was daily means from 16 RCMs driven by ERA40 reanalysis data over the period 1961–2000, provided by the ENSEMBLES (ENSEMBLE-based Predictions of Climate Changes and their Impacts) project, for a small catchment located in the Midlands, UK. All methods were conditioned on the entire time series, on separate months, and on an objective classification of Lamb's weather types. The performance of the MOS techniques was assessed with regard to temporal and spatial characteristics of the precipitation fields, as well as modelled runoff using the HBV rainfall-runoff model. The results indicate that DBS conditioned on classification patterns performed better than the other methods; however, an ensemble approach in terms of both climate models and downscaling methods is recommended to account for uncertainties in the MOS methods.
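Of the three MOS techniques, quantile-quantile mapping is the most compact to illustrate: each RCM value is replaced by the observed value at the same empirical quantile of the training period. A minimal sketch under assumed inputs (not the study's code) follows; conditioning on circulation patterns simply means fitting one transfer function per weather type and applying the matching one to each day.

```python
import numpy as np

def quantile_map(rcm_train, obs_train, rcm_new, n_q=101):
    """Empirical quantile-quantile mapping of daily precipitation."""
    q = np.linspace(0.0, 1.0, n_q)
    rcm_q = np.quantile(rcm_train, q)   # modelled quantiles (training)
    obs_q = np.quantile(obs_train, q)   # observed quantiles (training)
    # Map each new modelled value through the quantile transfer function
    return np.interp(rcm_new, rcm_q, obs_q)

# Illustrative use with synthetic data (gamma-like precipitation)
rng = np.random.default_rng(1)
rcm = rng.gamma(0.8, 4.0, 5000)     # RCM drizzles too often, too lightly
obs = rng.gamma(0.6, 7.0, 5000)
corrected = quantile_map(rcm, obs, rcm)
```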
Abstract:
Although there is a strong policy interest in the impacts corresponding to different degrees of climate change, there is so far little consistent empirical evidence of the relationship between climate forcing and impact. This is because the vast majority of impact assessments use emissions-based scenarios with associated socio-economic assumptions, and it is not feasible to infer impacts at other temperature changes by interpolation. This paper presents an assessment of the global-scale impacts of climate change in 2050 corresponding to defined increases in global mean temperature, using spatially explicit impacts models covering the water resources, river flooding, coastal, agriculture, ecosystem and built environment sectors. Pattern-scaling is used to construct climate scenarios associated with specific changes in global mean surface temperature, and a relationship between temperature and sea level is used to construct sea level rise scenarios. Climate scenarios are constructed from 21 climate models to give an indication of the uncertainty between forcing and response. The analysis shows that there is considerable uncertainty in the impacts associated with a given increase in global mean temperature, due largely to uncertainty in the projected regional change in precipitation. This has important policy implications. There is evidence for some sectors of a non-linear relationship between global mean temperature change and impact, due to the changing relative importance of temperature and precipitation change. In the socio-economic sectors considered here, the relationships are reasonably consistent between socio-economic scenarios if impacts are expressed in proportional terms, but there can be large differences in absolute terms. There are a number of caveats to the approach, including the use of pattern-scaling to construct scenarios, the use of one impacts model per sector, and the sensitivity of the shape of the relationships between forcing and response to the definition of the impact indicator.
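The pattern-scaling step is simple enough to show directly: a fixed spatial response pattern (local change per kelvin of global mean warming, derived from each GCM) is multiplied by the prescribed global temperature increase. The arrays below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Local precipitation response pattern (mm/day per K of global warming)
pattern = np.array([[-0.05, 0.02],
                    [ 0.08, -0.01]])
delta_T_global = 2.0                              # prescribed warming (K)
local_precip_change = pattern * delta_T_global    # scenario field (mm/day)
```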
Abstract:
Time series of global and regional mean Surface Air Temperature (SAT) anomalies are a common metric used to estimate recent climate change. Various techniques can be used to create these time series from meteorological station data. The degree of difference arising from using five different techniques, based on existing temperature anomaly dataset techniques, to estimate Arctic SAT anomalies over land and sea ice was investigated using reanalysis data as a testbed. Techniques which interpolated anomalies were found to produce smaller errors than non-interpolating techniques relative to the reanalysis reference. Kriging techniques provided the smallest errors in estimates of Arctic anomalies, and Simple Kriging was often the best kriging method in this study, especially over sea ice. A linear interpolation technique had, on average, Root Mean Square Errors (RMSEs) up to 0.55 K larger than the two kriging techniques tested. Non-interpolating techniques provided the least representative anomaly estimates. Nonetheless, they serve as useful checks for confirming whether estimates from interpolating techniques are reasonable. The interaction of meteorological station coverage with estimation techniques between 1850 and 2011 was simulated using an ensemble dataset comprising repeated individual years (1979-2011). All techniques were found to have larger RMSEs for earlier station coverages. This supports calls for increased data sharing and data rescue, especially in sparsely observed regions such as the Arctic.
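As an aside on the method, the Simple Kriging estimator with a known mean can be sketched compactly; the exponential covariance model and its parameters below are illustrative assumptions, not the study's fitted values.

```python
import numpy as np

def simple_kriging(coords, values, targets, mean, sill=1.0, length=500.0):
    """Simple Kriging of station anomalies to target points.

    coords: (n_sta, 2) station coordinates (km); values: (n_sta,) anomalies;
    targets: (n_tgt, 2) grid points; mean: the (known) field mean.
    """
    def cov(a, b):  # exponential covariance model
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sill * np.exp(-d / length)

    K = cov(coords, coords)          # station-station covariance matrix
    k = cov(coords, targets)        # station-target covariances
    w = np.linalg.solve(K, k)       # kriging weights (one column per target)
    return mean + w.T @ (values - mean)

# Three stations, one unobserved grid point
stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 200.0]])
anoms = np.array([1.2, 0.8, 0.5])
print(simple_kriging(stations, anoms, np.array([[50.0, 50.0]]), mean=0.0))
```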
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and, in some cases, floating point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on each core's location within the system. Heterogeneity increases further with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend towards shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Discovering this, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work: loop-based array updates and nearest-neighbour halo exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, with interpolation between results as necessary.
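The final step, predicting a deployment from benchmark data, reduces to table lookup with interpolation. A sketch for the loop-based compute part, with purely illustrative numbers (not measurements from the paper):

```python
import numpy as np

# Measured seconds per timestep for the array-update kernel at a few
# local problem sizes (illustrative benchmark results)
bench_cells = np.array([64e3, 256e3, 1e6, 4e6])
bench_time = np.array([0.011, 0.045, 0.19, 0.81])

def predict_compute_time(global_cells, n_tasks):
    """Interpolate benchmark results to the local size a decomposition implies."""
    local = global_cells / n_tasks
    return np.interp(local, bench_cells, bench_time)

# A halo-exchange model would be built analogously from message-size
# benchmarks, and the two parts summed for a per-step estimate.
print(predict_compute_time(16e6, 32))
```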
Abstract:
A procedure (concurrent multiplicative-additive objective analysis scheme [CMA-OAS]) is proposed for operational rainfall estimation using rain gauges and radar data. On the basis of a concurrent multiplicative-additive (CMA) decomposition of the spatially nonuniform radar bias, within-storm variability of rainfall and fractional coverage of rainfall are taken into account. Thus both spatially nonuniform radar bias, given that rainfall is detected, and bias in radar detection of rainfall are handled. The interpolation procedure of CMA-OAS is built on Barnes' objective analysis scheme (OAS), whose purpose is to estimate a filtered spatial field of the variable of interest through successive correction of residuals resulting from a Gaussian kernel smoother applied to spatial samples. The CMA-OAS first poses an optimization problem at each gauge-radar support point to obtain both a local multiplicative-additive radar bias decomposition and a regionalization parameter. Second, local biases and regionalization parameters are integrated into an OAS to estimate the multisensor rainfall at ground level. The procedure is suited to relatively sparse rain gauge networks. To demonstrate the procedure, six storms are analyzed at hourly steps over 10,663 km². Results generally indicated improved quality with respect to the other methods evaluated: a standard mean-field bias adjustment, a spatially variable adjustment with multiplicative factors, and ordinary cokriging.
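The underlying Barnes scheme is a successive correction with a Gaussian kernel, which can be sketched as follows; parameter choices are illustrative, and the CMA bias decomposition the paper layers on top is not shown.

```python
import numpy as np

def barnes_analysis(obs_xy, obs_val, grid_xy, kappa, n_passes=2, gamma=0.3):
    """Barnes objective analysis: successive Gaussian-kernel correction.

    obs_xy: (n_obs, 2) sample locations; obs_val: (n_obs,) values;
    grid_xy: (n_grid, 2) analysis points; kappa: kernel scale (length^2);
    gamma shrinks the kernel after the first pass to restore detail.
    """
    def weights(points, targets, k):
        d2 = ((targets[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / k)
        return w / w.sum(axis=1, keepdims=True)

    grid = np.zeros(len(grid_xy))       # analysis on the grid
    at_obs = np.zeros(len(obs_xy))      # analysis at the sample points
    for p in range(n_passes):
        k = kappa if p == 0 else gamma * kappa
        resid = obs_val - at_obs        # residuals to be spread
        grid += weights(obs_xy, grid_xy, k) @ resid
        at_obs += weights(obs_xy, obs_xy, k) @ resid
    return grid
```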
Abstract:
The East China Sea is an area where typhoon waves frequently occur. A wave spectra assimilation model has been developed to predict typhoon waves more accurately and operationally. This is the first time that wave data from Taiwan have been used to predict typhoon waves along the mainland China coast. The two-dimensional spectra observed off Taiwan's northeast coast modify the wave field output by the SWAN model through an optimal interpolation (OI) scheme. Wind field correction is not involved, as it contributes less than a quarter of the correction achieved by assimilation of waves. The initialization issue for assimilation is discussed. A linear evolution law for noise in the wave field is derived from the SWAN governing equations. A two-dimensional digital low-pass filter is used to obtain the initialized wave fields. The data assimilation model is optimized using data from typhoon Sinlaku. During typhoons Krosa and Morakot, data assimilation significantly improves the low-frequency wave energy and wave propagation direction along the Taiwan coast. For the far-field region, the assimilation model shows the expected ability to improve typhoon wave forecasts as well, as data assimilation enhances the low-frequency wave energy. The proportion of positive assimilation indexes is over 81% for all the periods of comparison. The paper also finds that the impact of data assimilation on the far-field region depends on the typhoon's stage of development and the swell propagation direction.
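Per spectral bin, the OI correction reduces to an error-variance-weighted blend of model and observation. A deliberately stripped-down scalar sketch (the paper's scheme, with spatial error correlations, is more elaborate):

```python
import numpy as np

def oi_blend(model_spec, obs_spec, model_var, obs_var):
    """Blend modelled and observed wave spectra bin by bin."""
    gain = model_var / (model_var + obs_var)   # scalar OI gain
    return model_spec + gain * (np.asarray(obs_spec) - np.asarray(model_spec))
```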
Abstract:
Palaeoclimates across Europe for 6000 y BP were estimated from pollen data using the modern pollen analogue technique constrained with lake-level data. The constraint consists of restricting the set of modern pollen samples considered as analogues of the fossil samples to those locations where the implied change in annual precipitation minus evapotranspiration (P–E) is consistent with the regional change in moisture balance as indicated by lakes. An artificial neural network was used for the spatial interpolation of lake-level changes to the pollen sites, and for mapping palaeoclimate anomalies. The climate variables reconstructed were mean temperature of the coldest month (Tc), growing degree days above 5 °C (GDD), moisture availability expressed as the ratio of actual to equilibrium evapotranspiration (α), and P–E. The constraint improved the spatial coherency of the reconstructed palaeoclimate anomalies, especially for P–E. The reconstructions indicate clear spatial and seasonal patterns of Holocene climate change, which can provide a quantitative benchmark for the evaluation of palaeoclimate model simulations. Winter temperatures (Tc) were 1–3 K greater than present in the far N and NE of Europe, but 2–4 K less than present in the Mediterranean region. Summer warmth (GDD) was greater than present in NW Europe (by 400–800 K day at the highest elevations) and in the Alps, but >400 K day less than present at lower elevations in S Europe. P–E was 50–250 mm less than present in NW Europe and the Alps, but α was 10–15% greater than present in S Europe and P–E was 50–200 mm greater than present in S and E Europe.
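The constrained analogue selection itself is a nearest-neighbour search in pollen-assemblage space with candidates screened by the lake-level constraint. A sketch under assumed data structures (the squared-chord distance is the conventional pollen dissimilarity; the paper's neural-network interpolation of the constraint is not shown):

```python
import numpy as np

def constrained_analogues(fossil, modern, modern_dpe, dpe_target, tol, n_best=5):
    """Return indices of the best modern analogues of a fossil sample.

    fossil: (n_taxa,) fossil assemblage (proportions)
    modern: (n_sites, n_taxa) modern samples
    modern_dpe: (n_sites,) change in P-E implied by choosing each site
    dpe_target: lake-level-derived regional P-E change; tol: tolerance
    """
    ok = np.abs(modern_dpe - dpe_target) <= tol        # lake-level constraint
    d = ((np.sqrt(modern[ok]) - np.sqrt(fossil)) ** 2).sum(axis=1)  # squared chord
    return np.flatnonzero(ok)[np.argsort(d)[:n_best]]
```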
Abstract:
A one-dimensional surface energy-balance lake model, coupled to a thermodynamic model of lake ice, is used to simulate variations in the temperature of, and evaporation from, three Estonian lakes: Karujärv, Viljandi and Kirjaku. The model is driven by daily climate data, derived by cubic-spline interpolation from monthly mean data, and was run for periods of 8 years (Kirjaku) up to 30 years (Viljandi). Simulated surface water temperature is in good agreement with observations: mean differences between simulated and observed temperatures are from −0.8°C to +0.1°C. The simulated duration of snow and ice cover is comparable with observations. However, the model generally underpredicts ice thickness and overpredicts snow depth. Sensitivity analyses suggest that the model results are robust across a wide range (0.1–2.0 m⁻¹) of lake extinction coefficient: surface temperature differs by less than 0.5°C between extreme values of the extinction coefficient. The model results are more sensitive to snow and ice albedos. However, changing the snow (0.2–0.9) and ice (0.15–0.55) albedos within realistic ranges does not improve the simulations of snow depth and ice thickness. The underestimation of ice thickness is correlated with the overestimation of snow cover, since a thick snow layer insulates the ice and limits ice formation. The overestimation of snow cover results from the assumption that all simulated winter precipitation falls as snow, a direct consequence of using daily climate data derived by interpolation from mean monthly data.
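The monthly-to-daily driver can be illustrated in a few lines; the temperatures below are invented placeholders, not the Estonian station data, and the periodic spline is one reasonable way (an assumption here) to keep the interpolation smooth across the year boundary.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Monthly mean air temperature (degC) placed at mid-month day-of-year
mid_month = np.array([15, 46, 74, 105, 135, 166,
                      196, 227, 258, 288, 319, 349])
t_monthly = np.array([-5.1, -5.8, -2.0, 4.3, 10.9, 15.2,
                      17.1, 15.8, 10.9, 5.9, 0.9, -3.2])

# Wrap the first month to the following year for periodic end conditions
spline = CubicSpline(np.append(mid_month, mid_month[0] + 365),
                     np.append(t_monthly, t_monthly[0]),
                     bc_type='periodic')
t_daily = spline(np.arange(1, 366))   # daily forcing for the lake model
```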
Abstract:
The question is addressed whether using unbalanced updates in ocean-data assimilation schemes for seasonal forecasting systems can result in a relatively poor simulation of zonal currents. An assimilation scheme in which temperature observations are used to update only the density field is compared to a scheme in which updates of the density field and zonal velocities are related by geostrophic balance. This is done for an equatorial linear shallow-water model. It is found that equatorial zonal velocities can be degraded if velocity is not updated in the assimilation procedure. Adding balanced updates to the zonal velocity is shown to be a simple remedy for the shallow-water model. Next, optimal interpolation (OI) schemes with balanced updates of the zonal velocity are implemented in two ocean general circulation models. First tests indicate a beneficial impact on equatorial upper-ocean zonal currents.
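The balanced update amounts to deriving a zonal-velocity increment from the meridional gradient of the analysed height (or density) increment. A sketch for off-equatorial latitudes (at the equator itself f vanishes and the scheme needs special treatment, not shown):

```python
import numpy as np

def balanced_u_increment(d_eta, lat_deg, dy):
    """Geostrophically balanced zonal-velocity increment.

    d_eta: (n_lat, n_lon) height-field analysis increment (m)
    lat_deg: (n_lat,) latitudes (nonzero); dy: meridional spacing (m)
    Implements u' = -(g / f) * d(eta')/dy.
    """
    g = 9.81
    f = 2.0 * 7.292e-5 * np.sin(np.deg2rad(lat_deg))[:, None]  # Coriolis
    return -(g / f) * np.gradient(d_eta, dy, axis=0)
```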
Abstract:
Eddy covariance has been used in urban areas to evaluate the net exchange of CO2 between the surface and the atmosphere. Typically, only the vertical flux is measured, at a height 2–3 times that of the local roughness elements; however, under conditions of relatively low instability, CO2 may accumulate in the airspace below the measurement height. This can result in inaccurate emission estimates if the accumulated CO2 drains away or is flushed upwards during thermal expansion of the boundary layer. Some studies apply a single-height storage correction; however, this requires the assumption that the response of the CO2 concentration profile to forcing is constant with height. Here a full seasonal cycle (7th June 2012 to 3rd June 2013) of single-height CO2 storage data, calculated from concentrations measured at 10 Hz by an open-path gas analyser, is compared to a data set calculated from a concurrent switched vertical profile measured (2 Hz, closed-path gas analyser) at 10 heights within and above a street canyon in central London. The assumption required for the former storage determination is shown to be invalid. For approximately regular street canyons at least one other measurement is required. Continuous measurements at fewer locations are shown to be preferable to a spatially dense, switched profile, as temporal interpolation is ineffective. The majority of the spectral energy of the CO2 storage time series was found to be between 0.001 and 0.2 Hz (500 and 5 s respectively); however, sampling frequencies of 2 Hz and below still result in significantly lower CO2 storage values. An empirical method of correcting CO2 storage values from under-sampled time series is proposed.
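The profile-based storage term is the column integral of the local concentration tendency, which a single-height correction approximates with one level. A minimal sketch under assumed array shapes:

```python
import numpy as np

def co2_storage(conc, heights, dt):
    """Column CO2 storage flux from a measured vertical profile.

    conc: (n_times, n_heights) CO2 density (e.g. mg m-3) at each level
    heights: (n_heights,) measurement heights (m) up to the flux height
    dt: sampling interval (s). Returns (n_times,) storage flux.
    """
    dcdt = np.gradient(conc, dt, axis=0)        # tendency at each level
    dz = np.diff(heights)
    mids = 0.5 * (dcdt[:, 1:] + dcdt[:, :-1])   # trapezoidal integration
    return (mids * dz).sum(axis=1)              # e.g. mg m-2 s-1
```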