982 results for Spatial Variability


Relevance: 60.00%

Abstract:

Global NDVI data are routinely derived from the AVHRR, SPOT-VGT, and MODIS/Terra earth observation records for a range of applications from terrestrial vegetation monitoring to climate change modeling. This has led to a substantial interest in the harmonization of multisensor records. Most evaluations of the internal consistency and continuity of global multisensor NDVI products have focused on time-series harmonization in the spectral domain, often neglecting the spatial domain. We fill this void by applying variogram modeling (a) to evaluate the differences in spatial variability between 8-km AVHRR, 1-km SPOT-VGT, and 1-km, 500-m, and 250-m MODIS NDVI products over eight EOS (Earth Observing System) validation sites, and (b) to characterize the decay of spatial variability as a function of pixel size (i.e. data regularization) for spatially aggregated Landsat ETM+ NDVI products and a real multisensor dataset. First, we demonstrate that the conjunctive analysis of two variogram properties – the sill and the mean length scale metric – provides a robust assessment of the differences in spatial variability between multiscale NDVI products that are due to spatial (nominal pixel size, point spread function, and view angle) and non-spatial (sensor calibration, cloud clearing, atmospheric corrections, and length of multi-day compositing period) factors. Next, we show that as the nominal pixel size increases, the decay of spatial information content follows a logarithmic relationship with a stronger fit for the spatially aggregated NDVI products (R2 = 0.9321) than for the native-resolution AVHRR, SPOT-VGT, and MODIS NDVI products (R2 = 0.5064). This relationship serves as a reference for evaluating the differences in spatial variability and length scales in multiscale datasets at native or aggregated spatial resolutions. The outcomes of this study suggest that multisensor NDVI records cannot be integrated into a long-term data record without proper consideration of all factors affecting their spatial consistency. Hence, we propose an approach for selecting the spatial resolution at which differences in spatial variability between NDVI products from multiple sensors are minimized. This approach provides practical guidance for the harmonization of long-term multisensor datasets.
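As a minimal sketch of the variogram diagnostics described above, the script below estimates an empirical semivariogram from a gridded NDVI array and fits an exponential model to obtain a sill and a length scale; the NDVI scene, the model choice and the starting values are illustrative assumptions, not the authors' data or code.

```python
# Minimal sketch (not the paper's implementation): estimating a variogram sill
# and length scale from a gridded NDVI image with NumPy/SciPy.
import numpy as np
from scipy.optimize import curve_fit

def empirical_variogram(ndvi, max_lag=30):
    """Isotropic empirical semivariogram estimated along rows and columns."""
    lags = np.arange(1, max_lag + 1)
    gamma = []
    for h in lags:
        dx = ndvi[:, h:] - ndvi[:, :-h]    # lag-h differences along columns
        dy = ndvi[h:, :] - ndvi[:-h, :]    # lag-h differences along rows
        gamma.append(0.5 * np.nanmean(np.concatenate([dx.ravel(), dy.ravel()]) ** 2))
    return lags, np.array(gamma)

def exponential_model(h, nugget, sill, length_scale):
    """Exponential variogram model; it approaches nugget + sill at large lags."""
    return nugget + sill * (1.0 - np.exp(-h / length_scale))

# Hypothetical NDVI scene standing in for a real product subset.
rng = np.random.default_rng(0)
ndvi = rng.normal(0.5, 0.1, size=(200, 200))

lags, gamma = empirical_variogram(ndvi)
(nugget, sill, length_scale), _ = curve_fit(
    exponential_model, lags, gamma, p0=[0.0, gamma.max(), 5.0], maxfev=10000
)
print(f"sill ~ {sill:.4f}, length scale ~ {length_scale:.1f} pixels")
```

Repeating the fit on the same scene aggregated to progressively coarser pixel sizes would trace out the decay of spatial variability with pixel size that the abstract fits with a logarithmic relationship.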

Relevance: 60.00%

Abstract:

We have compiled 223 sedimentary charcoal records from Australasia in order to examine the temporal and spatial variability of fire regimes during the Late Quaternary. While some of these records cover more than a full glacial cycle, here we focus on the last 70,000 years when the number of individual records in the compilation allows more robust conclusions. On orbital time scales, fire in Australasia predominantly reflects climate, with colder periods characterized by less and warmer intervals by more biomass burning. The composite record for the region also shows considerable millennial-scale variability during the last glacial interval (73.5–14.7 ka). Within the limits of the dating uncertainties of individual records, the variability shown by the composite charcoal record is more similar to the form, number and timing of Dansgaard–Oeschger cycles as observed in Greenland ice cores than to the variability expressed in the Antarctic ice-core record. The composite charcoal record suggests increased biomass burning in the Australasian region during Greenland Interstadials and reduced burning during Greenland Stadials. Millennial-scale variability is characteristic of the composite record of the sub-tropical high pressure belt during the past 21 ka, but the tropics show a somewhat simpler pattern of variability with major peaks in biomass burning around 15 ka and 8 ka. There is no distinct change in fire regime corresponding to the arrival of humans in Australia at 50 ± 10 ka and no correlation between archaeological evidence of increased human activity during the past 40 ka and the history of biomass burning. However, changes in biomass burning in the last 200 years may have been exacerbated or influenced by humans.
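The sketch below illustrates one generic way such a composite can be built, by standardizing each charcoal record as z-scores and averaging the records on a common age axis; it is a hedged outline of the general recipe, not necessarily the authors' exact compositing procedure, and the records are synthetic.

```python
# Generic compositing sketch (synthetic records, not the 223-record compilation).
import numpy as np

def composite_record(records, time_axis):
    """records: iterable of (age_ka, charcoal) arrays; returns the mean z-score curve."""
    curves = []
    for age, charcoal in records:
        z = (charcoal - charcoal.mean()) / charcoal.std()   # standardize each record
        order = np.argsort(age)
        curves.append(np.interp(time_axis, age[order], z[order]))
    return np.mean(curves, axis=0)

# Two hypothetical charcoal records with irregular age models (ka BP).
rng = np.random.default_rng(4)
records = [(np.sort(rng.uniform(0.0, 70.0, 80)), rng.gamma(2.0, 1.0, 80)) for _ in range(2)]
time_axis = np.arange(0.0, 70.0, 0.5)
composite = composite_record(records, time_axis)
print(f"composite z-score range: {composite.min():.2f} to {composite.max():.2f}")
```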

Relevance: 60.00%

Abstract:

How people live, work, move from place to place and consume, and the technologies they use, all affect heat emissions in a city, which in turn influence urban weather and climate. Here we document changes to a global anthropogenic heat flux (QF) model that enhance its spatial resolution (from 0.5° × 0.5° to 30″ × 30″) and temporal coverage (historical, current and future). QF is estimated across Europe (1995–2015), considering changes in temperature, population and energy use. While on average QF is small (of the order of 1.9–4.6 W m−2 across all the urban areas of Europe), significant spatial variability is documented (maximum 185 W m−2). Changes in energy consumption due to changes in climate are predicted to cause a 13% (11%) increase in QF on summer (winter) weekdays. The largest impact results from changes in temperature, which influence building energy use; for winter, with the coldest February on record, the mean flux for the urban areas of Europe is 4.56 W m−2, and for summer (warmest July on record) it is 2.23 W m−2. Detailed results from London highlight that the spatial resolution used to model QF is critical and must be appropriate for the application at hand, whether scientific understanding or decision making.
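For scale, the back-of-envelope sketch below converts an assumed per-capita energy consumption and population density into a mean heat flux in W m-2; it is an inventory-style illustration with made-up numbers, not the paper's model, which also resolves temporal profiles and the temperature dependence of building energy use.

```python
# Back-of-envelope sketch (not the paper's QF model): converting per-capita
# energy consumption and population density into an anthropogenic heat flux.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def anthropogenic_heat_flux(energy_per_capita_kwh, population_density_km2):
    """QF (W m-2) if all consumed energy is eventually released as heat.

    energy_per_capita_kwh : annual energy use per person (kWh/yr), assumed value
    population_density_km2 : people per square kilometre, assumed value
    """
    joules_per_person = energy_per_capita_kwh * 3.6e6          # kWh -> J
    watts_per_person = joules_per_person / SECONDS_PER_YEAR    # mean power per person
    persons_per_m2 = population_density_km2 / 1e6              # km-2 -> m-2
    return watts_per_person * persons_per_m2

# Illustrative numbers only: 20,000 kWh per person per year, 2,000 people per km2.
print(f"QF ~ {anthropogenic_heat_flux(20_000, 2_000):.1f} W m-2")
```

With these assumed inputs the flux lands near the upper end of the 1.9–4.6 W m−2 urban-average range quoted above; much higher local maxima arise where energy use is concentrated.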

Relevance: 60.00%

Abstract:

To optimise the placement of small wind turbines in urban areas, a detailed understanding of the spatial variability of the wind resource is required. At present, due to a lack of observations, the NOABL wind speed database is frequently used to estimate the wind resource at a potential site. However, recent work has shown that this tends to overestimate the wind speed in urban areas. This paper suggests a method for adjusting the predictions of the NOABL in urban areas by considering the impact of the underlying surface at the neighbourhood scale, in which the nature of the surface is characterised at a 1 km2 resolution using an urban morphology database. The model was then used to estimate the variability of the annual mean wind speed across Greater London at a height typical of current small wind turbine installations. Initial validation of the results suggests that the predicted wind speeds are considerably more accurate than the NOABL values. The derived wind map therefore currently provides the best opportunity to identify the neighbourhoods in Greater London at which small wind turbines yield their highest energy production. The model does not consider street-scale processes; however, previously derived scaling factors can be applied to relate the neighbourhood wind speed to a value at a specific rooftop site. The results showed that the wind speed predicted across London is relatively low, exceeding 4 m s-1 at only 27% of the neighbourhoods in the city. Of these sites, less than 10% are within 10 km of the city centre, with the majority over 20 km from the city centre. Consequently, it is predicted that small wind turbines tend to perform better towards the outskirts of the city; for cities which fit the Burgess concentric ring model, such as Greater London, 'distance from city centre' is therefore a useful parameter for siting small wind turbines. However, there are a number of neighbourhoods close to the city centre at which the wind speed is relatively high, and these sites can only be identified with a detailed representation of the urban surface, such as that developed in this study.
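The sketch below shows one common way neighbourhood morphology can enter such an adjustment, by scaling a wind speed at an assumed blending height down to a typical hub height with the neutral logarithmic profile; the morphometric rules of thumb (d ≈ 0.7H, z0 ≈ 0.1H), the blending height and all numbers are illustrative assumptions, not the paper's actual scheme.

```python
# Illustrative sketch only (not the paper's adjustment method): scaling a wind
# speed from an assumed blending height down to turbine hub height with the
# neutral logarithmic wind profile, using roughness parameters estimated from
# neighbourhood-scale urban morphology.
import math

def log_law_wind_speed(u_ref, z_ref, z_target, z0, d):
    """Neutral log law: u(z) is proportional to ln((z - d) / z0)."""
    return u_ref * math.log((z_target - d) / z0) / math.log((z_ref - d) / z0)

# Hypothetical neighbourhood: mean building height H = 8 m, with the common
# morphometric estimates d ~ 0.7 H and z0 ~ 0.1 H (assumed, not from the paper).
H = 8.0
d, z0 = 0.7 * H, 0.1 * H
u_blend = 8.0   # assumed wind speed at a 200 m blending height (m/s)
u_hub = log_law_wind_speed(u_blend, z_ref=200.0, z_target=15.0, z0=z0, d=d)
print(f"estimated neighbourhood wind speed at 15 m: {u_hub:.1f} m/s")
```

With these assumed values the neighbourhood-scale speed at 15 m is about 3.6 m s-1, below the 4 m s-1 level that the abstract reports is exceeded at only 27% of London neighbourhoods.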

Relevance: 60.00%

Abstract:

The effects of a non-uniform wind field along the path of a scintillometer are investigated. Theoretical spectra are calculated for a range of scenarios where the crosswind varies in space or time and compared to the 'ideal' spectrum based on a constant uniform crosswind. It is verified that the relation between the refractive-index structure parameter and the scintillometer signal remains valid and invariant for both spatially and temporally varying crosswinds. However, the spectral shape may change significantly, preventing accurate estimation of the crosswind speed from the peak of the frequency spectrum and retrieval of the structure parameter from the plateau of the power spectrum. Comparison with experimental data suggests that non-uniform crosswind conditions could be responsible for previously unexplained features sometimes seen in observed spectra. By accounting for the distribution of crosswind, theoretical spectra can be generated that closely replicate the observations, leading to a better understanding of the measurements. Spatial variability of wind speeds should be expected for paths other than those that are parallel to the surface and over flat, homogeneous areas, whilst fluctuations in time are important for all sites.
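To illustrate the distribution-averaging idea in the simplest possible terms, the toy script below mixes single-crosswind spectra over an assumed crosswind distribution; the spectral shape used is a stand-in rather than the actual scintillation spectrum, so only the qualitative broadening and shift of the composite peak carries over.

```python
# Conceptual sketch (stand-in spectral shape, not the true scintillation
# spectrum): averaging single-crosswind spectra over a crosswind distribution
# broadens and shifts the composite spectral peak.
import numpy as np

def stand_in_spectrum(freq, crosswind, aperture=0.15):
    """Toy spectrum whose peak frequency scales with crosswind / aperture."""
    f_peak = crosswind / aperture
    return (freq / f_peak) * np.exp(-((freq / f_peak) ** 2))

freq = np.linspace(0.1, 100.0, 2000)                        # Hz
crosswinds = np.linspace(0.5, 6.0, 50)                      # m/s along the path
weights = np.exp(-0.5 * ((crosswinds - 3.0) / 1.5) ** 2)    # assumed crosswind PDF
weights /= weights.sum()

composite = sum(w * stand_in_spectrum(freq, v) for w, v in zip(weights, crosswinds))
uniform = stand_in_spectrum(freq, 3.0)
print(f"peak (uniform crosswind):   {freq[np.argmax(uniform)]:.1f} Hz")
print(f"peak (crosswind mixture):   {freq[np.argmax(composite)]:.1f} Hz")
```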

Relevance: 60.00%

Abstract:

Airborne dust affects the Earth's energy balance, an impact that is measured in terms of the implied change in net radiation (or radiative forcing, in W m-2) at the top of the atmosphere. There remains considerable uncertainty in the magnitude and sign of direct forcing by airborne dust under the current climate. Much of this uncertainty stems from simplified assumptions about mineral dust-particle size, composition and shape, which are applied in remote sensing retrievals of dust characteristics and in dust-cycle models. Improved estimates of direct radiative forcing by dust will require improved characterization of the spatial variability in particle characteristics to provide reliable information on dust optical properties. This includes constraints on: (1) particle-size distribution, including discrimination of particle subpopulations and quantification of the amount of dust in the sub-10 µm to <0.1 µm mass fraction; (2) particle composition, specifically the abundance of iron oxides, and whether particles consist of single or multi-mineral grains; (3) particle shape, including degree of sphericity and surface roughness, as a function of size and mineralogy; and (4) the degree to which dust particles are aggregated together. The use of techniques that measure the size, composition and shape of individual particles will provide a better basis for optical modelling.

Relevance: 60.00%

Abstract:

Studies of climate change impacts on the terrestrial biosphere have been completed without recognition of the integrated nature of the biosphere. Improved assessment of the impacts of climate change on food and water security requires the development and use of models representing not only each component but also their interactions. To meet this requirement, the Joint UK Land Environment Simulator (JULES) land surface model has been modified to include a generic parametrisation of annual crops. The new model, JULES-crop, is described and evaluated at global and site levels for four globally important crops: wheat, soybean, maize and rice. JULES-crop demonstrates skill in simulating the inter-annual variations of yield for maize and soybean at the global and country levels, and for wheat for major spring wheat producing countries. The impact of the new parametrisation, compared to the standard configuration, on the simulation of surface heat fluxes is largely an alteration of the partitioning between latent and sensible heat fluxes during the later part of the growing season. Further evaluation at the site level shows that the model captures the seasonality of leaf area index, gross primary production and canopy height better than the standard JULES. However, this does not lead to an improvement in the simulation of sensible and latent heat fluxes. The performance of JULES-crop from both an Earth system and a crop yield model perspective is encouraging. However, more effort is needed to develop the parametrisation of the model for specific applications. Key future model developments identified include the introduction of processes such as irrigation and nitrogen limitation, which will enable better representation of the spatial variability in yield.

Relevance: 60.00%

Abstract:

In general, particle filters need large numbers of model runs in order to avoid filter degeneracy in high-dimensional systems. The recently proposed, fully nonlinear equivalent-weights particle filter overcomes this requirement by replacing the standard model transition density with two different proposal transition densities. The first proposal density is used to relax all particles towards the high-probability regions of state space as defined by the observations. The crucial second proposal density is then used to ensure that the majority of particles have equivalent weights at observation time. Here, the performance of the scheme is explored in a high-dimensional (65,500-dimensional) simplified ocean model. The success of the equivalent-weights particle filter in matching the true model state is shown using the mean of just 32 particles in twin experiments. It is of particular significance that this remains true even as the number and spatial variability of the observations are changed. The results from rank histograms are less easy to interpret and can be influenced considerably by the parameter values used. This article also explores the sensitivity of the performance of the scheme to the chosen parameter values, and the effect of using different model error parameters in the truth compared with the ensemble model runs.
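The toy script below illustrates only the relaxation-proposal ingredient described above, in a one-dimensional linear model with proposal-corrected weights and resampling; it is not the equivalent-weights step itself, and the model, observation and nudging parameters are assumptions for illustration.

```python
# Toy illustration (scalar model, not the 65,500-dimensional ocean model and not
# the full equivalent-weights step): a particle filter whose proposal relaxes
# particles towards the observation, with weights corrected for the proposal.
import numpy as np

rng = np.random.default_rng(1)
n_particles, n_steps = 32, 50
q_model, r_obs, nudge = 0.5, 0.2, 0.6   # model error var, obs error var, relaxation strength

def gauss_logpdf(x, mean, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

truth = 0.0
particles = rng.normal(0.0, 1.0, n_particles)
for _ in range(n_steps):
    truth = 0.9 * truth + rng.normal(0.0, np.sqrt(q_model))
    y = truth + rng.normal(0.0, np.sqrt(r_obs))

    prior_mean = 0.9 * particles                          # deterministic model step
    proposal_mean = prior_mean + nudge * (y - prior_mean)  # relaxation towards the obs
    new = proposal_mean + rng.normal(0.0, np.sqrt(q_model), n_particles)

    log_w = (gauss_logpdf(y, new, r_obs)                     # likelihood
             + gauss_logpdf(new, prior_mean, q_model)        # model transition density
             - gauss_logpdf(new, proposal_mean, q_model))    # proposal density
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    particles = new[rng.choice(n_particles, n_particles, p=w)]   # resample
print(f"truth = {truth:+.2f}, ensemble mean = {particles.mean():+.2f}")
```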

Relevance: 60.00%

Abstract:

Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca. 2% from the estimate obtained using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
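A minimal sparse-linear-algebra sketch of the 2DTR idea follows, assuming the coarse field is a block average of the fine grid and smoothness is imposed through a discrete Laplacian penalty weighted by γ; the grid sizes, the γ value and the coarse values are placeholders rather than the study's configuration.

```python
# Minimal sketch (not the paper's code): downscaling a coarse field to a fine
# grid with two-dimensional Tikhonov regularization. A block-averages the fine
# grid to the coarse grid; L is a discrete Laplacian weighted by gamma.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

n_fine, factor = 32, 4            # 32x32 fine grid downscaled from an 8x8 coarse grid
n_coarse = n_fine // factor

# A: block-averaging operator mapping the fine grid to the coarse grid.
rows, cols = [], []
for i in range(n_coarse):
    for j in range(n_coarse):
        for di in range(factor):
            for dj in range(factor):
                rows.append(i * n_coarse + j)
                cols.append((i * factor + di) * n_fine + (j * factor + dj))
A = sp.csr_matrix((np.full(len(rows), 1.0 / factor**2), (rows, cols)),
                  shape=(n_coarse**2, n_fine**2))

# L: 2-D Laplacian built from 1-D second-difference operators.
D = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n_fine, n_fine))
I = sp.identity(n_fine)
L = sp.kron(I, D) + sp.kron(D, I)

rng = np.random.default_rng(0)
y_coarse = rng.normal(0.55, 0.08, n_coarse**2)   # hypothetical coarse NDVI values

gamma = 0.1                                      # smoothness weight (tunable)
x_fine = spsolve((A.T @ A + gamma * L.T @ L).tocsc(), A.T @ y_coarse)
print(f"fine-grid values span {x_fine.min():.3f} to {x_fine.max():.3f}")
```

Larger γ produces smoother fine-grid fields, which is consistent with the abstract's finding that a useful γ is tied to the range of the experimental semivariogram.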

Relevance: 60.00%

Abstract:

This paper presents a critique of current methods of sampling and analyzing soils for metals in archaeological prospection. Commonly used methodologies in soil science are shown to be suitable for archaeological investigations, with a concomitant improvement in their resolution. Understanding the soil-fraction location, concentration range, and spatial distribution of autochthonous (native) soil metals is shown to be a vital precursor to archaeological-site investigations, as this is the background upon which anthropogenic deposition takes place. Nested sampling is suggested as the most cost-effective method of investigating the spatial variability in the autochthonous metal concentrations. The use of the appropriate soil horizon (or sampling depth) and point sampling are critical in the preparation of a sampling regime. Simultaneous extraction is proposed as the most efficient method of identifying the location and eventual fate of autochthonous and anthropogenic metals, respectively.

Relevance: 60.00%

Abstract:

Trace element measurements in PM10–2.5, PM2.5–1.0 and PM1.0–0.3 aerosol were performed with 2 h time resolution at kerbside, urban background and rural sites during the ClearfLo winter 2012 campaign in London. The environment-dependent variability of emissions was characterized using the Multilinear Engine implementation of the positive matrix factorization model, conducted on data sets comprising all three sites but segregated by size. Combining the sites enabled separation of sources with high temporal covariance but significant spatial variability. Separation of sizes improved source resolution by preventing sources occurring in only a single size fraction from having too small a contribution for the model to resolve. Anchor profiles were retrieved internally by analysing data subsets, and these profiles were used in the analyses of the complete data sets of all sites for enhanced source apportionment. A total of nine different factors were resolved (notable elements in brackets): in PM10–2.5, brake wear (Cu, Zr, Sb, Ba), other traffic-related (Fe), resuspended dust (Si, Ca), sea/road salt (Cl), aged sea salt (Na, Mg) and industrial (Cr, Ni); in PM2.5–1.0, brake wear, other traffic-related, resuspended dust, sea/road salt, aged sea salt and S-rich (S); and in PM1.0–0.3, traffic-related (Fe, Cu, Zr, Sb, Ba), resuspended dust, sea/road salt, aged sea salt, reacted Cl (Cl), S-rich and solid fuel (K, Pb). Human activities enhance the kerb-to-rural concentration gradients of coarse aged sea salt, typically considered to have a natural source, by a factor of 1.7–2.2. These site-dependent concentration differences reflect the effect of local resuspension processes in London. The anthropogenically influenced factors traffic (brake wear and other traffic-related processes), dust and sea/road salt provide further kerb-to-rural concentration enhancements, through direct source emissions, by a factor of 3.5–12.7. The traffic and dust factors are mainly emitted in PM10–2.5 and show strong diurnal variations, with concentrations up to 4 times higher during rush hour than during night-time. The regionally influenced S-rich and solid fuel factors, occurring primarily in PM1.0–0.3, have negligible resuspension influences, and their concentrations are similar throughout the day and across the regions.
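As a greatly simplified stand-in for the ME-2 positive matrix factorization step (no measurement-uncertainty weighting, no anchor profiles, and synthetic concentrations), the sketch below factorizes an element-by-time matrix into factor profiles and factor time series using non-negative matrix factorization.

```python
# Simplified stand-in (not ME-2/PMF with uncertainty weighting): non-negative
# matrix factorization of an element-by-time matrix into factor profiles and
# factor time series, illustrating the source-apportionment step described above.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
elements = ["Cu", "Zr", "Sb", "Ba", "Fe", "Si", "Ca", "Cl", "Na", "Mg", "S", "K", "Pb"]

# Hypothetical 2-hourly concentrations (samples x elements), strictly non-negative.
X = rng.gamma(shape=2.0, scale=1.0, size=(500, len(elements)))

model = NMF(n_components=6, init="nndsvda", max_iter=1000, random_state=0)
G = model.fit_transform(X)        # factor time series (samples x factors)
F = model.components_             # factor profiles (factors x elements)

for k, profile in enumerate(F):
    top = [elements[i] for i in np.argsort(profile)[::-1][:3]]
    print(f"factor {k}: most loaded elements -> {', '.join(top)}")
```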

Relevance: 60.00%

Abstract:

Previous versions of the Consortium for Small-scale Modelling (COSMO) numerical weather prediction model have used a constant sea-ice surface temperature, but observations show a high degree of variability on sub-daily timescales. To account for this, we have implemented a thermodynamic sea-ice module in COSMO and performed simulations at resolutions of 15 km and 5 km for the Laptev Sea area in April 2008. Temporal and spatial variability of surface and 2-m air temperature are verified against four automatic weather stations deployed along the edge of the western New Siberian polynya during the Transdrift XIII-2 expedition and against surface temperature charts derived from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data. A remarkable agreement between the new model results and these observations demonstrates that the implemented sea-ice module can be applied for short-range simulations. Prescribing the polynya areas daily, our COSMO simulations provide a high-resolution and high-quality atmospheric data set for the Laptev Sea for the period 14–30 April 2008. Based on this data set, we derive a mean total sea-ice production rate of 0.53 km3/day for all Laptev Sea polynyas under the assumption that the polynyas are ice-free, and a rate of 0.30 km3/day if a 10 cm thin-ice layer is assumed. Our results indicate that ice production in Laptev Sea polynyas has been overestimated in previous studies.
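For scale, the back-of-envelope sketch below converts an assumed net surface heat loss over an assumed open-water polynya area into a daily ice production volume; the numbers are illustrative rather than taken from the simulations, but they land in the same range as the rates quoted above.

```python
# Back-of-envelope sketch (illustrative numbers, not the study's simulation):
# polynya ice production from the net surface heat loss over open water,
# V = Q * A * dt / (rho_ice * L_f), reported in km3/day.
RHO_ICE = 910.0        # kg m-3, sea-ice density
LATENT_HEAT = 3.34e5   # J kg-1, latent heat of fusion

def ice_production_km3_per_day(heat_loss_w_m2, polynya_area_km2):
    """Daily ice volume produced if all surface heat loss goes into freezing."""
    area_m2 = polynya_area_km2 * 1e6
    volume_m3 = heat_loss_w_m2 * area_m2 * 86400.0 / (RHO_ICE * LATENT_HEAT)
    return volume_m3 / 1e9

# Assumed values: ~400 W m-2 net heat loss over ~4000 km2 of open polynya water.
print(f"{ice_production_km3_per_day(400.0, 4000.0):.2f} km3/day")
```

With these assumptions the estimate is roughly 0.45 km3/day, of the same order as the 0.53 km3/day derived in the study for ice-free polynyas.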

Relevance: 60.00%

Abstract:

A comprehensive atmospheric boundary layer (ABL) data set was collected in eight field experiments (two during each season) over open water and sea ice in the Baltic Sea during 1998–2001, with the primary objective to validate the coupled atmosphere–ice–ocean–land surface model BALTIMOS (BALTEX Integrated Model System). Measurements were taken by aircraft, ships and surface stations and cover the mean and turbulent structure of the ABL, including turbulent fluxes, radiation fluxes, and cloud conditions. Measurement examples of the spatial variability of the ABL over the ice edge zone and of the stable ABL over open water demonstrate the wide range of ABL conditions collected and the strength of the data set, which can also be used to validate other regional models.

Relevance: 60.00%

Abstract:

One of the main questions in Neoproterozoic geology concerns the extent and dynamics of the glacial systems that are recorded on all continents. We present evidence for short transport distances and localized sediment sources for the Bebedouro Formation, which records Neoproterozoic glaciomarine sedimentation in the central-eastern São Francisco Craton (SFC), Brazil. New data are presented on clast composition, based on point counting in thin section, and on SHRIMP dating of pebbles and detrital zircon. Cluster analysis of the clast compositional data revealed a pronounced spatial variability of clast composition among the diamictites, indicating the presence of individual glaciers or ice streams feeding the basin. Detrital zircon ages reveal distinct populations of Archean and Palaeoproterozoic age. The youngest detrital zircon, dated at 874 ± 9 Ma, constrains the maximum depositional age of these diamictites. We interpret the provenance of the glacial diamictites to be restricted to sources inside the SFC, suggesting deposition in an environment similar to that of ice streams in modern, high-latitude glaciers.
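The sketch below shows the kind of cluster analysis this implies, applied to hypothetical point-count proportions; the clast categories, counts and clustering choices are placeholders, not the published data set or the authors' exact procedure.

```python
# Illustrative sketch (hypothetical point-count data): hierarchical clustering of
# clast-composition proportions, one way to expose compositional groupings
# between diamictite samples fed by different sources.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(5)

# Hypothetical point counts (300 clasts per sample) from two assumed source areas.
# Columns: granite, gneiss, quartzite, carbonate, volcanic.
counts_a = rng.multinomial(300, [0.55, 0.25, 0.10, 0.05, 0.05], size=6)
counts_b = rng.multinomial(300, [0.10, 0.15, 0.30, 0.35, 0.10], size=6)
counts = np.vstack([counts_a, counts_b])
proportions = counts / counts.sum(axis=1, keepdims=True)

Z = linkage(proportions, method="ward")          # Ward linkage on the proportions
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into two groups
print("cluster membership per sample:", labels)
```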

Relevance: 60.00%

Abstract:

This work emphasizes the inclusion of uncertainties in the assessment of structural behaviour, aiming at a better representation of the system's characteristics and a quantification of the significance of these uncertainties in design. Comparisons are made between existing classical reliability analysis techniques, such as FORM, direct Monte Carlo simulation (MC) and Monte Carlo simulation with adaptive importance sampling (MCIS), and the approximate Response Surface (RS) and Artificial Neural Network (ANN) methods. Where possible, the comparisons highlight the advantages and drawbacks of each technique in problems of increasing complexity. Formulations ranging from explicit limit state functions to implicit formulations with spatial variability of loading and material properties, including stochastic fields, are analysed. Particular attention is given to the reliability analysis of reinforced concrete structures, including the effect of the spatial variability of their properties. To this end, a finite element model for the representation of reinforced concrete is proposed that incorporates the main characteristics observed in this material. A model was also developed for the generation of multidimensional non-Gaussian stochastic fields for the material properties, independent of the finite element mesh, and techniques were implemented to accelerate the structural evaluations required by all of the employed methods. For reliability assessment using the Response Surface technique, the algorithm developed by Rajashekhar et al. (1993) was implemented. For the Artificial Neural Network approach, codes were developed for the simulation of multilayer perceptron and radial basis function networks and then incorporated into the reliability evaluation algorithm developed by Shao et al. (1997). In general, the simulation techniques showed rather poor performance on the more complex problems, with the first-order FORM technique and the approximate Response Surface and Artificial Neural Network methods standing out, albeit with accuracy compromised by the approximations involved.
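As a minimal illustration of direct Monte Carlo simulation, the baseline against which the approximate methods above are compared, the sketch below estimates the failure probability of a toy explicit limit state with assumed distributions; nothing here reproduces the thesis' reinforced concrete or stochastic-field models.

```python
# Minimal sketch (toy limit state, not the thesis' structural model): direct
# Monte Carlo estimation of the failure probability P[g(R, S) <= 0] for an
# explicit limit state g = R - S with lognormal resistance and normal load.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 1_000_000

R = rng.lognormal(mean=np.log(30.0), sigma=0.10, size=n)   # resistance (assumed)
S = rng.normal(loc=20.0, scale=3.0, size=n)                # load effect (assumed)

g = R - S                       # limit state: failure when g <= 0
pf = np.mean(g <= 0.0)
beta = -stats.norm.ppf(pf) if pf > 0 else np.inf
print(f"P_f ~ {pf:.2e}, reliability index beta ~ {beta:.2f}")
```

Importance sampling, response surfaces and neural network surrogates all aim to reproduce this estimate with far fewer evaluations of g when each evaluation requires a full (for example, finite element) structural analysis.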