942 results for spatial correlation
Abstract:
This research supports the goal of the horticultural sector of the Colombian southwest to obtain climatic information, specifically to predict the monthly average temperature at sites where it has not been measured. The data correspond to monthly average temperature and were recorded at meteorological stations in Valle del Cauca, Colombia, South America. Two components are identified in the data: (1) a temporal component, determined by characteristics of the time series, the distribution of the monthly average temperature across the months, and the temporal phenomena that increase (El Niño) and decrease (La Niña) the temperature values; and (2) a site component, determined by the clear differentiation of two populations, the valley and the mountains, which are associated with the pattern of monthly average temperature and with altitude. Finally, because of the closeness between meteorological stations, it is possible to find spatial correlation between data from nearby sites. First, a random coefficient model without a spatial covariance structure in the errors is obtained by month and geographical location (mountains and valley, respectively). Models for wet periods in the mountains show normally distributed errors; models for the valley and for dry periods in the mountains do not. In the models for mountains and wet periods, omnidirectional weighted variograms of the residuals show spatial continuity. Both the random coefficient model without a spatial covariance structure in the errors and the random coefficient model with such a structure capture the influence of the El Niño and La Niña phenomena, which indicates that the inclusion of the random part in the model is appropriate. The altitude variable contributes significantly to the models for the mountains.
In general, the cross-validation process indicates that the random coefficient models with spatial spherical and with spatial Gaussian covariance structures are the best models for the wet periods in the mountains, and the worst is the model used by the Colombian Institute for Meteorology, Hydrology and Environmental Studies (IDEAM) to predict temperature.
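The variogram diagnostics mentioned above (omnidirectional variograms of residuals, spherical covariance structure) can be sketched as follows; the function names and the synthetic data are illustrative assumptions, not the authors' code:

```python
import numpy as np

def empirical_variogram(coords, values, n_bins=10):
    """Omnidirectional empirical semivariogram from scattered observations."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    g = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)     # each pair counted once
    d, g = d[iu], g[iu]
    edges = np.linspace(0.0, d.max(), n_bins + 1)
    lags, gammas = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (d > lo) & (d <= hi)
        if m.any():
            lags.append(d[m].mean())
            gammas.append(g[m].mean())
    return np.array(lags), np.array(gammas)

def spherical_model(h, nugget, sill, rang):
    """Spherical semivariogram: rises from the nugget to the sill at range `rang`."""
    h = np.asarray(h, dtype=float)
    inside = nugget + (sill - nugget) * (1.5 * h / rang - 0.5 * (h / rang) ** 3)
    return np.where(h < rang, inside, sill)
```

A residual field with spatial continuity shows semivariance growing with lag before levelling off at the sill, which is what the model fit checks.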
Abstract:
A wireless sensor network (WSN) is a group of sensors linked by a wireless medium to perform distributed sensing tasks. WSNs have attracted wide interest from academia and industry alike due to their diversity of applications, including home automation, smart environments, and emergency services in various buildings. The primary goal of a WSN is to collect the data sensed by its sensors. These data are typically heavily noisy and exhibit temporal and spatial correlation. To extract useful information from such data, as this paper demonstrates, various analysis techniques must be applied. Data mining is a process in which a wide spectrum of data analysis methods is used; it is applied in the paper to analyse data collected from WSNs monitoring an indoor environment in a building. A case study demonstrates how data mining can be used to optimise the use of office space in a building.
Abstract:
Rainfall can be modeled as a spatially correlated random field superimposed on a background mean value; therefore, geostatistical methods are appropriate for the analysis of rain gauge data. Nevertheless, there are certain typical features of these data that must be taken into account to produce useful results, including the generally non-Gaussian mixed distribution, the inhomogeneity and low density of observations, and the temporal and spatial variability of spatial correlation patterns. Many studies show that rigorous geostatistical analysis performs better than other available interpolation techniques for rain gauge data. Important elements are the use of climatological variograms and the appropriate treatment of rainy and nonrainy areas. Benefits of geostatistical analysis for rainfall include ease of estimating areal averages, estimation of uncertainties, and the possibility of using secondary information (e.g., topography). Geostatistical analysis also facilitates the generation of ensembles of rainfall fields that are consistent with a given set of observations, allowing for a more realistic exploration of errors and their propagation in downstream models, such as those used for agricultural or hydrological forecasting. This article provides a review of geostatistical methods used for kriging, exemplified where appropriate by daily rain gauge data from Ethiopia.
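A minimal ordinary-kriging sketch in the spirit of this review, assuming a spherical variogram model; function names, parameters and defaults are illustrative assumptions, not the article's code:

```python
import numpy as np

def ordinary_kriging(coords, values, target, sill=1.0, rang=5.0, nugget=0.0):
    """Ordinary kriging at one target location with a spherical variogram.
    Returns the prediction and the kriging weights (which sum to one)."""
    def cov(h):
        h = np.asarray(h, dtype=float)
        frac = np.clip(h / rang, 0.0, 1.0)
        gamma = nugget + (sill - nugget) * (1.5 * frac - 0.5 * frac ** 3)
        gamma = np.where(h == 0.0, 0.0, gamma)   # gamma(0) = 0 by definition
        return sill - gamma                       # covariance from variogram
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Kriging system with a Lagrange multiplier enforcing unit-sum weights
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(d)
    K[n, n] = 0.0
    k = np.ones(n + 1)
    k[:n] = cov(np.linalg.norm(coords - target, axis=1))
    w = np.linalg.solve(K, k)
    return float(w[:n] @ values), w[:n]
```

With zero nugget, kriging is an exact interpolator: predicting at a data location reproduces the observed value.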
Abstract:
Mobile-to-mobile (M-to-M) communications are expected to play a crucial role in future wireless systems and networks. In this paper, we consider an M-to-M multiple-input multiple-output (MIMO) maximal ratio combining system and assess its performance in spatially correlated channels. The analysis assumes double-correlated Rayleigh and lognormal fading channels and is performed in terms of average symbol error probability, outage probability, and ergodic capacity. To obtain the receive and transmit spatial correlation functions needed for the performance analysis, we used a three-dimensional (3D) M-to-M MIMO channel model, which takes into account the effects of fast fading and shadowing. The expressions for the considered metrics are derived in closed form as a function of the average signal-to-noise ratio per receive antenna and are further approximated using the recursive adaptive Simpson quadrature method. Numerical results are provided to show the effects of system parameters, such as the distance between antenna elements, maximum elevation angle of scatterers, orientation angle of the antenna array in the x–y plane, angle between the x–y plane and the antenna array orientation, and degree of scattering in the x–y plane, on the system performance. Copyright © 2011 John Wiley & Sons, Ltd.
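Spatially correlated MIMO fading of this kind is often simulated with the separable Kronecker model; the sketch below is an assumption for illustration (the paper itself derives correlations from a 3D geometric channel model, not the Kronecker form):

```python
import numpy as np

def exp_corr(n, rho):
    """Exponential correlation matrix R[i, j] = rho**|i - j| for a linear array."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def correlated_rayleigh(n_r, n_t, R_r, R_t, n_samples, rng):
    """Kronecker-model correlated Rayleigh MIMO channels:
    H = R_r^{1/2} G R_t^{1/2} with G i.i.d. circularly symmetric CN(0, 1)."""
    A = np.linalg.cholesky(R_r)
    B = np.linalg.cholesky(R_t)
    G = (rng.standard_normal((n_samples, n_r, n_t)) +
         1j * rng.standard_normal((n_samples, n_r, n_t))) / np.sqrt(2.0)
    return A @ G @ B.conj().T
```

The empirical receive covariance of the generated channels converges to R_r, which is how such simulators are usually validated.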
Cross-layer design for MIMO systems over spatially correlated and keyhole Nakagami-m fading channels
Abstract:
Cross-layer design is a generic designation for a set of efficient adaptive transmission schemes, across multiple layers of the protocol stack, that are aimed at enhancing the spectral efficiency and increasing the transmission reliability of wireless communication systems. In this paper, one such cross-layer design scheme that combines physical layer adaptive modulation and coding (AMC) with link layer truncated automatic repeat request (T-ARQ) is proposed for multiple-input multiple-output (MIMO) systems employing orthogonal space-time block coding (OSTBC). The performance of the proposed cross-layer design is evaluated in terms of achievable average spectral efficiency (ASE), average packet loss rate (PLR) and outage probability, for which analytical expressions are derived, considering transmission over two types of MIMO fading channels, namely, spatially correlated Nakagami-m fading channels and keyhole Nakagami-m fading channels. Furthermore, the effects of the maximum number of ARQ retransmissions, numbers of transmit and receive antennas, Nakagami fading parameter and spatial correlation parameters, are studied and discussed based on numerical results and comparisons. Copyright © 2009 John Wiley & Sons, Ltd.
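The link-layer side of such a design rests on a standard truncated-ARQ relation: a packet is finally lost only if the initial transmission and all allowed retransmissions fail. A minimal sketch, assuming independent attempts with a fixed per-attempt error rate (a simplification of the channel-dependent rates the paper analyses):

```python
def truncated_arq_plr(per, max_retx):
    """Residual packet loss rate under truncated ARQ: the packet is dropped
    only if the first transmission and all max_retx retransmissions fail."""
    return per ** (max_retx + 1)

def mean_transmissions(per, max_retx):
    """Average number of transmission attempts per packet
    (a truncated geometric number of tries)."""
    return sum(per ** k for k in range(max_retx + 1))
```

Dividing the nominal AMC rate by the mean number of attempts gives a goodput-style spectral efficiency, which is the trade-off the cross-layer optimisation balances against the PLR target.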
Abstract:
Sea surface temperature (SST) can be estimated from day and night observations of the Spinning Enhanced Visible and Infra-Red Imager (SEVIRI) by optimal estimation (OE). We show that exploiting the 8.7 μm channel, in addition to the “traditional” wavelengths of 10.8 and 12.0 μm, improves OE SST retrieval statistics in validation. However, the main benefit is an improvement in the sensitivity of the SST estimate to variability in true SST. In a fair, single-pixel comparison, the 3-channel OE gives better results than the SST estimation technique presently operational within the Ocean and Sea Ice Satellite Application Facility. This operational technique is to use SST retrieval coefficients, followed by a bias-correction step informed by radiative transfer simulation. However, the operational technique has an additional “atmospheric correction smoothing”, which improves its noise performance, and hitherto had no analogue within the OE framework. Here, we propose an analogue to atmospheric correction smoothing, based on the expectation that atmospheric total column water vapour has a longer spatial correlation length scale than SST features. The approach extends the observations input to the OE to include the averaged brightness temperatures (BTs) of nearby clear-sky pixels, in addition to the BTs of the pixel for which SST is being retrieved. The retrieved quantities are then the single-pixel SST and the clear-sky total column water vapour averaged over the vicinity of the pixel. This reduces the noise in the retrieved SST significantly. The robust standard deviation of the new OE SST compared to matched drifting buoys becomes 0.39 K for all data. The smoothed OE gives SST sensitivity of 98% on average. This means that diurnal temperature variability and ocean frontal gradients are more faithfully estimated, and that the influence of the prior SST used is minimal (2%). This benefit is not available using traditional atmospheric correction smoothing.
Abstract:
Remotely sensed land cover maps are increasingly used as inputs into environmental simulation models whose outputs inform decisions and policy-making. Risks associated with these decisions are dependent on model output uncertainty, which is in turn affected by the uncertainty of land cover inputs. This article presents a method of quantifying the uncertainty that results from potential mis-classification in remotely sensed land cover maps. In addition to quantifying uncertainty in the classification of individual pixels in the map, we also address the important case where land cover maps have been upscaled to a coarser grid to suit the users’ needs and are reported as proportions of land cover type. The approach is Bayesian and incorporates several layers of modelling but is straightforward to implement. First, we incorporate data in the confusion matrix derived from an independent field survey, and discuss the appropriate way to model such data. Second, we account for spatial correlation in the true land cover map, using the remotely sensed map as a prior. Third, spatial correlation in the mis-classification characteristics is induced by modelling their variance. The result is that we are able to simulate posterior means and variances for individual sites and the entire map using a simple Monte Carlo algorithm. The method is applied to the Land Cover Map 2000 for the region of England and Wales, a map used as an input into a current dynamic carbon flux model.
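The confusion-matrix layer of such a Bayesian approach can be sketched with a simple Monte Carlo algorithm. The version below uses a flat Dirichlet prior on the misclassification probabilities and omits the spatial-correlation layers described in the abstract, so it is an illustrative simplification rather than the authors' model:

```python
import numpy as np

def posterior_class_proportions(confusion, mapped_counts, n_draws, rng):
    """Monte Carlo posterior mean and variance of true land-cover proportions
    in a coarse grid cell. `confusion[i, j]` counts survey sites of true
    class i mapped as class j; `mapped_counts[j]` counts pixels of mapped
    class j in the cell. Flat Dirichlet prior (an assumption of this sketch)."""
    n_classes = confusion.shape[0]
    draws = np.empty((n_draws, n_classes))
    for t in range(n_draws):
        # One Dirichlet draw of P(true = i | mapped = j) per mapped class j
        p_true_given_mapped = np.stack(
            [rng.dirichlet(confusion[:, j] + 1) for j in range(n_classes)],
            axis=1)
        draws[t] = p_true_given_mapped @ mapped_counts / mapped_counts.sum()
    return draws.mean(axis=0), draws.var(axis=0)
```

Because each column of the drawn matrix is a probability vector, every simulated proportion vector sums to one, and the spread of the draws quantifies the uncertainty propagated from the survey.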
Abstract:
Two methods are developed to estimate net surface energy fluxes based upon satellite-based reconstructions of radiative fluxes at the top of atmosphere and the atmospheric energy tendencies and transports from the ERA-Interim reanalysis. Method 1 applies the mass adjusted energy divergence from ERA-Interim while method 2 estimates energy divergence based upon the net energy difference at the top of atmosphere and the surface from ERA-Interim. To optimise the surface flux and its variability over ocean, the divergences over land are constrained to match the monthly area mean surface net energy flux variability derived from a simple relationship between the surface net energy flux and the surface temperature change. The energy divergences over the oceans are then adjusted to remove an unphysical residual global mean atmospheric energy divergence. The estimated net surface energy fluxes are compared with other data sets from reanalysis and atmospheric model simulations. The spatial correlation coefficients of multi-annual means between the estimations made here and other data sets are all around 0.9. There are good agreements in area mean anomaly variability over the global ocean, but discrepancies in the trend over the eastern Pacific are apparent.
Abstract:
Accurate high-resolution records of snow accumulation rates in Antarctica are crucial for estimating ice sheet mass balance and subsequent sea level change. Snowfall rates at Law Dome, East Antarctica, have been linked with regional atmospheric circulation to the mid-latitudes as well as regional Antarctic snowfall. Here, we extend the length of the Law Dome accumulation record from 750 years to 2035 years, using recent annual layer dating that extends to 22 BCE. Accumulation rates were calculated as the ratio of measured to modelled layer thicknesses, multiplied by the long-term mean accumulation rate. The modelled layer thicknesses were based on a power-law vertical strain rate profile fitted to observed annual layer thickness. The periods 380–442, 727–783 and 1970–2009 CE have above-average snow accumulation rates, while 663–704, 933–975 and 1429–1468 CE were below average, and decadal-scale snow accumulation anomalies were found to be relatively common (74 events in the 2035-year record). The calculated snow accumulation rates show good correlation with atmospheric reanalysis estimates, and significant spatial correlation over a wide expanse of East Antarctica, demonstrating that the Law Dome record captures larger-scale variability across a large region of East Antarctica well beyond the immediate vicinity of the Law Dome summit. Spectral analysis reveals periodicities in the snow accumulation record which may be related to El Niño–Southern Oscillation (ENSO) and Interdecadal Pacific Oscillation (IPO) frequencies.
Abstract:
The Shuttle Radar Topography Mission (SRTM) was flown on the space shuttle Endeavour in February 2000, with the objective of acquiring a digital elevation model of all land between 60 degrees north latitude and 56 degrees south latitude, using interferometric synthetic aperture radar (InSAR) techniques. The SRTM data are distributed at a horizontal resolution of 1 arc-second (~30 m) for areas within the USA and at 3 arc-second (~90 m) resolution for the rest of the world. A resolution of 90 m can be considered suitable for small- or medium-scale analysis, but it is too coarse for more detailed purposes. One alternative is to interpolate the SRTM data at a finer resolution; this will not increase the level of detail of the original digital elevation model (DEM), but it will lead to a surface with coherent angular properties (i.e. slope, aspect) between neighbouring pixels, which is an important characteristic when dealing with terrain analysis. This work intends to show how the proper adjustment of variogram and kriging parameters, namely the nugget effect and the maximum distance within which values are used in interpolation, can be set to achieve quality results when resampling SRTM data from 3" to 1". We present results for a test area in the western USA, including different adjustment schemes (changes in the nugget effect value and in the interpolation radius) and comparisons with the original 1" model of the area, with the National Elevation Dataset (NED) DEMs, and with other interpolation methods (splines and inverse distance weighted (IDW)).
The basic concepts for using kriging to resample terrain data are: (i) working only with the immediate neighbourhood of the predicted point, due to the high spatial correlation of the topographic surface and the omnidirectional behaviour of the variogram at short distances; (ii) adding a very small random variation to the coordinates of the points prior to interpolation, to avoid punctual artifacts generated by predicted points with the same location as original data points; and (iii) using a small nugget effect value, to avoid smoothing that can obliterate terrain features. Drainages derived from the surfaces interpolated by kriging and by splines agree well with streams derived from the 1" NED, with correct identification of watersheds, even though a few differences occur in the positions of some rivers in flat areas. Although the 1" surfaces resampled by kriging and by splines are very similar, we consider the results produced by kriging superior, since the spline-interpolated surface still presented some noise and linear artifacts, which were removed by kriging.
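Concepts (i) and (ii) above translate into two small helpers; the function names and the jitter scale are illustrative assumptions:

```python
import numpy as np

def jitter(coords, scale, rng):
    """Concept (ii): add tiny random offsets so prediction locations never
    coincide exactly with data locations, avoiding punctual artifacts."""
    return coords + rng.uniform(-scale, scale, size=coords.shape)

def local_neighbourhood(coords, target, radius):
    """Concept (i): indices of data points within the interpolation radius
    of the target, exploiting the short-range correlation of terrain."""
    return np.where(np.linalg.norm(coords - target, axis=1) <= radius)[0]
```

Restricting the kriging system to the local neighbourhood also keeps the matrix to be solved small, which matters when resampling a full DEM tile.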
Abstract:
This study covers a period when society changed from a pre-industrial agricultural society to a post-industrial service-producing society. Parallel with this social transformation, major population changes took place. In this study, we analyse how local population changes are affected by neighbouring populations. To do so, we use the last 200 years of local population change that redistributed the population in Sweden. We use the literature to identify several different processes and spatial dependencies in the redistribution between a parish and its surrounding parishes. The analysis is based on a unique unchanged historical parish division, and we use an index of local spatial correlation to describe the different kinds of spatial dependencies that have influenced the redistribution of the population. To control for inherent time dependencies, we introduce a non-separable spatio-temporal correlation model into the analysis of population redistribution. Hereby, several different spatial dependencies can be observed simultaneously over time. The main conclusions are that while local population changes were highly dependent on neighbouring populations in the 19th century, this spatial dependence had already become insignificant when two parishes are separated by 5 kilometres in the late 20th century. Another conclusion is that the time dependency in the population change is higher when the population redistribution is weak, as it currently is and as it was during the 19th century until the start of the industrial revolution.
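An index of local spatial correlation of this kind is commonly computed as a local Moran-type statistic; the sketch below is an assumption for illustration, since the abstract does not specify the exact index used:

```python
import numpy as np

def local_spatial_correlation(x, W):
    """Local Moran-type index per site: z_i * (W z)_i, with z the
    standardised variable and W a spatial weight matrix. Positive values
    mark sites surrounded by similar values (hot/cold spots); negative
    values mark local contrasts with the neighbourhood."""
    z = (x - x.mean()) / x.std()
    return z * (W @ z)
```

Applied to population change per parish, with W encoding parish adjacency (or distance bands such as 5 km), the index separates clustered growth/decline from checkerboard-like local contrasts.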
Abstract:
This paper explores the institutional change introduced by the public disclosure of an education development index (IDEB, Basic Education Development Index) in 2007 to identify the effect of education accountability on yardstick competition in education spending for Brazilian municipalities. Our results are threefold. First, political incentives are pervasive in setting education expenditures. The spatial strategic behavior in education spending is estimated to be lower for lame ducks and for incumbents with majority support in the city council. This suggests a strong relation between commitment and accountability, which reinforces yardstick competition theory. Second, we find a minor reduction (20%) in spatial interaction for public education spending after IDEB's disclosure, compared to the spatial correlation before the disclosure of the index. This suggests that the public release of information may decrease the importance of the neighbors' counterpart information in voters' decisions. Third, exploring the discontinuity of IDEB's disclosure rule around the cut-off of 30 students enrolled in the grade under assessment, our estimates suggest that the spatial autocorrelation, and hence yardstick competition, is reduced by 54%. Finally, an unforeseen result suggests that the disclosure of IDEB increases expenditures, by more than 100% according to our estimates.
Abstract:
The main objective of this study is to apply recently developed statistical-physics methods to time series analysis, particularly to electrical induction profiles from oil well data, to study the petrophysical similarity of those wells in a spatial distribution. For this, we used the DFA method in order to determine whether this technique can be used to characterize the fields spatially. After obtaining the DFA values for all wells, we applied clustering analysis, using the non-hierarchical method called K-means. Usually based on the Euclidean distance, K-means divides the elements of a data matrix N into k groups, so that the similarities among elements belonging to different groups are the smallest possible. In order to test whether a dataset generated by the K-means method, or randomly generated datasets, form spatial patterns, we created the parameter Ω (index of neighborhood). High values of Ω reveal more aggregated data; low values of Ω indicate scattered data or data without spatial correlation. We conclude that the DFA data from the 54 wells are grouped and can be used to characterize spatial fields. Applying the contour level technique confirms the results obtained by K-means, showing that DFA is effective for spatial analysis.
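The abstract does not give the formula for Ω, so the following is only an illustrative aggregation index in its spirit, labelled as an assumption: the fraction of wells whose nearest neighbour shares their cluster label.

```python
import numpy as np

def omega_index(coords, labels):
    """Hypothetical neighborhood index in the spirit of the paper's Omega
    (exact definition not given in the abstract): the fraction of points
    whose nearest neighbour carries the same cluster label. Values near 1
    indicate spatially aggregated clusters; low values indicate scattered
    labels with little spatial correlation."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-matches
    nearest = d.argmin(axis=1)
    return float(np.mean(labels[nearest] == labels))
```

Comparing the observed index against its distribution under randomly permuted labels (a Monte Carlo test, as in the study) tells whether the clusters are spatially structured.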
Abstract:
In recent years, the DFA introduced by Peng was established as an important tool capable of detecting long-range autocorrelation in non-stationary time series. This technique has been successfully applied to various areas, such as econophysics, biophysics, medicine, physics and climatology. In this study, we used the DFA technique to obtain the Hurst exponent (H) of the electric density profile (RHOB) of 53 wells from the Field School of Namorados. We want to know whether H can be used to spatially characterize the data field. Two cases arise: in the first, the set of H values reflects the local geology, with wells that are geographically closer showing similar H, so H can be used in geostatistical procedures; in the second, each well has its own H, the information from the wells is uncorrelated, and the profiles show only random fluctuations in H without any spatial structure. Cluster analysis is a method widely used in statistical analysis; here we use the non-hierarchical k-means method. In order to verify whether a set of data generated by the k-means method shows spatial patterns, we create the parameter Ω (index of neighborhood): high Ω indicates more aggregated data, low Ω indicates dispersed data or data without spatial correlation. With the help of this index and the Monte Carlo method, we verify that randomly clustered data show a distribution of Ω that is lower than the actual cluster Ω. We therefore conclude that the H data obtained from the 53 wells are grouped and can be used to characterize spatial patterns. The analysis of contour levels confirmed the results of the k-means.
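The DFA step that yields the Hurst-like exponent H can be sketched as follows; the scale choices are illustrative assumptions:

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis (first order): the slope of
    log F(n) vs log n estimates the Hurst-like scaling exponent.
    F(n) is the RMS deviation from local linear trends in windows of size n."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    logs_n, logs_f = [], []
    for n in scales:
        n_seg = len(y) // n
        if n_seg < 2:
            continue
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        f2 = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)     # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        logs_n.append(np.log(n))
        logs_f.append(0.5 * np.log(np.mean(f2)))
    return np.polyfit(logs_n, logs_f, 1)[0]
```

For uncorrelated noise the exponent is close to 0.5, while persistent (long-range correlated) profiles score higher, which is the property that lets H discriminate between the two cases described above.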
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)