939 results for Channel estimation error


Relevance: 80.00%

Abstract:

The Wyner-Ziv video coding (WZVC) rate-distortion performance is highly dependent on the quality of the side information, an estimation of the original frame created at the decoder. This paper characterizes the WZVC efficiency when motion-compensated frame interpolation (MCFI) techniques are used to generate the side information, a difficult problem in WZVC, especially because the decoder only has some reference decoded frames available. The proposed WZVC compression efficiency rate model relates the power spectral density of the estimation error to the accuracy of the MCFI motion field. From this model, some interesting conclusions can be derived about the impact of the motion field smoothness, and of its correlation to the true motion trajectories, on the compression performance.
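As a toy illustration of the quantity this model works with, the sketch below builds side information for a synthetic 1-D "frame" by averaging two noisy reference frames (a crude stand-in for MCFI, not the paper's method) and computes the periodogram of the resulting estimation error; all signals and noise levels are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "frames": a smooth base signal plus independent noise per frame.
n = 256
base = np.sin(2 * np.pi * np.arange(n) / 32)
f_prev = base + 0.1 * rng.standard_normal(n)
f_next = base + 0.1 * rng.standard_normal(n)
f_orig = base + 0.1 * rng.standard_normal(n)

# Crude stand-in for MCFI: average the two reference frames.
side_info = 0.5 * (f_prev + f_next)

# Estimation error and its power spectrum (periodogram).
err = f_orig - side_info
psd = np.abs(np.fft.rfft(err)) ** 2 / n

# Total error power is the mean squared estimation error.
err_power = np.mean(err ** 2)
```

A better motion field would shrink `err` and hence the error power spectrum, which is the dependence the rate model captures.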

Relevance: 80.00%

Abstract:

Fingerprinting is an indoor location technique, based on wireless networks, in which data stored during the offline phase is compared with data collected by the mobile device during the online phase. In most real-life scenarios, the mobile node used during the offline phase is different from the mobile nodes that will be used during the online phase. This means that there might be very significant differences between the Received Signal Strength (RSS) values acquired by the mobile node and the ones stored in the Fingerprinting Map. As a consequence, this difference between RSS values might increase the location estimation error. One possible solution to minimize these differences is to adapt the RSS values acquired during the online phase before sending them to the Location Estimation Algorithm. The internal parameters of the Location Estimation Algorithms, for example the weights of the Weighted k-Nearest Neighbour, might also need to be tuned for every type of terminal. This paper addresses both approaches, using Direct Search optimization methods to adapt the Received Signal Strength values and to tune the Location Estimation Algorithm parameters. As a result, it was possible to decrease the location estimation error obtained originally without any calibration procedure.
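A minimal sketch of the Weighted k-Nearest Neighbour estimator mentioned above, with an invented three-point fingerprint map and inverse-distance weights (one common weighting choice; the paper's tuned parameters and Direct Search calibration step are not reproduced here):

```python
import numpy as np

def wknn_locate(rss, fp_rss, fp_xy, k=3, eps=1e-9):
    """Weighted k-Nearest Neighbour location estimate.

    rss    : (m,) online RSS vector
    fp_rss : (p, m) fingerprint-map RSS vectors
    fp_xy  : (p, 2) fingerprint-map coordinates
    """
    d = np.linalg.norm(fp_rss - rss, axis=1)   # distances in RSS space
    idx = np.argsort(d)[:k]                    # k closest fingerprints
    w = 1.0 / (d[idx] + eps)                   # inverse-distance weights
    return (w[:, None] * fp_xy[idx]).sum(axis=0) / w.sum()

# Tiny fingerprint map: 3 reference points, 2 access points (values invented).
fp_rss = np.array([[-40.0, -70.0], [-70.0, -40.0], [-55.0, -55.0]])
fp_xy = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 5.0]])

est = wknn_locate(np.array([-41.0, -69.0]), fp_rss, fp_xy, k=2)
```

The online RSS vector is closest to the first fingerprint, so the estimate lands near its coordinates; terminal-dependent RSS offsets would shift `rss` and hence the estimate, which is what the calibration in the paper compensates.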

Relevance: 80.00%

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the degree of Master in Electrical and Computer Engineering.

Relevance: 80.00%

Abstract:

Nowadays there is an increasing number of location-aware mobile applications. However, these applications typically retrieve location only from the mobile device's GPS chip, which means that indoors, or in denser environments, they do not work properly. To provide location information everywhere, a pedestrian Inertial Navigation System (INS) is typically used, but such systems can have a large estimation error because, in order to make the system wearable, they rely on low-cost and low-power sensors. In this work a pedestrian INS is proposed in which force sensors are combined with accelerometer data to better detect the stance phase of the human gait cycle, which leads to improvements in location estimation. Besides this sensor fusion, an information fusion architecture is proposed, based on information from GPS and several inertial units placed on the pedestrian's body, that learns the pedestrian's gait behavior in order to correct, in real time, the inertial sensor errors, thus improving location estimation.
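The stance-phase detection idea, combining accelerometer and force-sensor readings, can be sketched as below; the thresholds and the simple AND rule are illustrative assumptions, not the detector proposed in the work:

```python
import numpy as np

def detect_stance(acc_mag, force, acc_tol=0.3, force_min=0.5, g=9.81):
    """Flag stance-phase samples: accelerometer magnitude near gravity
    AND foot force above a contact threshold (hypothetical thresholds)."""
    near_g = np.abs(acc_mag - g) < acc_tol
    contact = force > force_min
    return near_g & contact

# Toy data: three stance samples (foot down, |a| near g, force high),
# then swing samples where one or both conditions fail.
acc_mag = np.array([9.8, 9.75, 9.9, 12.0, 7.5, 9.8])
force   = np.array([0.9, 0.8,  0.7, 0.1,  0.0, 0.05])
stance = detect_stance(acc_mag, force)
```

The last sample shows why the fusion helps: the accelerometer alone would accept it (magnitude near g), but the force sensor rules it out. Detected stance samples are where a zero-velocity update can reset the accumulated inertial drift.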

Relevance: 80.00%

Abstract:

This work is devoted to broadband wireless transmission techniques, which are serious candidates for implementation in future broadband wireless and cellular systems, aiming at providing high and reliable data transmission together with high mobility. In order to cope with doubly-selective channels, receiver structures based on OFDM and SC-FDE block transmission techniques are proposed, which allow cost-effective implementations using FFT-based signal processing. The first subject addressed is the impact of the number of multipath components, and of the diversity order, on the asymptotic performance of OFDM and SC-FDE, both uncoded and with different channel coding schemes. The obtained results show that the number of relevant separable multipath components is a key element influencing the performance of OFDM and SC-FDE schemes. Then, improved estimation and detection for OFDM-based broadcasting systems employing SFN (Single Frequency Network) operation is introduced. An initial coarse channel estimate is obtained by resorting to low-power training sequences, and an iterative receiver with joint detection and channel estimation is presented. The achieved results show very good performance, close to that obtained with perfect channel estimation. The next topic is also related to SFN systems, devoting special attention to the time-distortion effects inherent to these networks. Typically, SFN broadcast wireless systems employ OFDM schemes to cope with severely time-dispersive channels; however, frequency errors due to carrier frequency offset (CFO) compromise the orthogonality between subcarriers. As an alternative approach, the possibility of using SC-FDE schemes (characterized by reduced envelope fluctuations and higher robustness to carrier frequency errors) is evaluated, and a technique employing joint CFO estimation and compensation under severe time-distortion effects is proposed.
Finally, broadband mobile wireless systems are considered, in which the relative motion between transmitter and receiver induces a Doppler shift that is different for each propagation path, depending on the angle of incidence of that path relative to the direction of travel. This represents a severe impairment in wireless digital communication systems, since multipath propagation combined with Doppler effects leads to drastic and unpredictable fluctuations of the envelope of the received signal, severely affecting detection performance. The channel variations due to this effect are very difficult to estimate and compensate. In this work we propose a set of SC-FDE iterative receivers implementing efficient estimation and tracking techniques. The performance results show that the proposed receivers perform very well, even in the presence of significant Doppler spread between the different groups of multipath components.
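As background for the CFO discussion, here is a standard textbook-style CFO estimator based on two repeated training halves (often attributed to Moose); it is a generic illustration, not the joint estimation and compensation technique proposed in this work:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64        # samples per training half
cfo = 0.002   # true CFO in cycles per sample (assumed value)

# Two identical training halves; the CFO rotates the second half
# by an extra phase of 2*pi*cfo*n relative to the first.
t = rng.standard_normal(n) + 1j * rng.standard_normal(n)
tx = np.concatenate([t, t])
k = np.arange(2 * n)
rx = tx * np.exp(2j * np.pi * cfo * k)   # noiseless for clarity

# Moose-style estimate: phase of the correlation between the halves.
corr = np.vdot(rx[:n], rx[n:])           # sum of conj(first) * second
cfo_hat = np.angle(corr) / (2 * np.pi * n)

# Compensate before detection.
rx_comp = rx * np.exp(-2j * np.pi * cfo_hat * k)
```

After compensation the two halves coincide again, so the subcarrier orthogonality broken by the CFO is restored; the estimator is unambiguous only for |cfo| below 1/(2n).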

Relevance: 80.00%

Abstract:

The optimization of the pilot overhead in single-user wireless fading channels is investigated, and the dependence of this overhead on various system parameters of interest (e.g., fading rate, signal-to-noise ratio) is quantified. The achievable pilot-based spectral efficiency is expanded with respect to the fading rate about the no-fading point, which leads to an accurate order expansion for the pilot overhead. This expansion identifies that the pilot overhead, as well as the spectral efficiency penalty with respect to a reference system with genie-aided CSI (channel state information) at the receiver, depend on the square root of the normalized Doppler frequency. It is also shown that the widely-used block fading model is a special case of more accurate continuous fading models in terms of the achievable pilot-based spectral efficiency. Furthermore, it is established that the overhead optimization for multiantenna systems is effectively the same as for single-antenna systems with the normalized Doppler frequency multiplied by the number of transmit antennas.
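The square-root dependence of the overhead on the normalized Doppler frequency, and the multiplication by the number of transmit antennas in the multiantenna case, can be illustrated with a toy scaling function (the constant `c` is a placeholder, not the paper's expansion coefficient):

```python
import math

def pilot_overhead(f_doppler_norm, n_tx=1, c=1.0):
    """First-order overhead scaling: alpha is proportional to
    sqrt(n_tx * f_D), with c a system-dependent constant
    (hypothetical here, standing in for the paper's expansion)."""
    return c * math.sqrt(n_tx * f_doppler_norm)

slow = pilot_overhead(1e-4)   # slow fading
fast = pilot_overhead(1e-2)   # 100x faster fading
ratio = fast / slow           # 100x faster fading, only 10x more overhead
```

The second property from the abstract also falls out directly: a 4-antenna system at Doppler f_D behaves, overhead-wise, like a single-antenna system at 4*f_D.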

Relevance: 80.00%

Abstract:

Expressions relating spectral efficiency, power, and Doppler spectrum, are derived for Rayleigh-faded wireless channels with Gaussian signal transmission. No side information on the state of the channel is assumed at the receiver. Rather, periodic reference signals are postulated in accordance with the functioning of most wireless systems. The analysis relies on a well-established lower bound, generally tight and asymptotically exact at low SNR. In contrast with most previous studies, which relied on block-fading channel models, a continuous-fading model is adopted. This embeds the Doppler spectrum directly in the derived expressions, imbuing them with practical significance. Closed-form relationships are obtained for the popular Clarke-Jakes spectrum and informative expansions, valid for arbitrary spectra, are found for the low- and high-power regimes. While the paper focuses on scalar channels, the extension to multiantenna settings is also discussed.
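For reference, the Clarke-Jakes Doppler spectrum mentioned above has the classical U-shaped density S(f) = 1/(pi * f_m * sqrt(1 - (f/f_m)^2)) for |f| < f_m, normalized to unit power; the value of f_m below is an arbitrary example:

```python
import math

def clarke_jakes_psd(f, f_m):
    """Clarke-Jakes Doppler spectrum, unit total power, support |f| < f_m."""
    if abs(f) >= f_m:
        return 0.0
    return 1.0 / (math.pi * f_m * math.sqrt(1.0 - (f / f_m) ** 2))

f_m = 100.0  # maximum Doppler shift in Hz (assumed for the example)

# Numerical sanity check: midpoint-rule integral over (-f_m, f_m)
# should be close to 1 despite the integrable edge singularities.
n = 100000
df = 2 * f_m / n
total = sum(clarke_jakes_psd(-f_m + (i + 0.5) * df, f_m)
            for i in range(n)) * df
```

The density blows up at the band edges (paths aligned with the direction of travel), which is why this spectrum is a demanding test case for pilot-based channel estimation.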

Relevance: 80.00%

Abstract:

The vast territories that were radioactively contaminated during the 1986 Chernobyl accident provide a substantial data set of radioactive monitoring data, which can be used for the verification and testing of the different spatial estimation (prediction) methods involved in risk assessment studies. Using the Chernobyl data set for such a purpose is motivated by its heterogeneous spatial structure (the data are characterized by large-scale correlations, short-scale variability, spotty features, etc.). The present work is concerned with the application of the Bayesian Maximum Entropy (BME) method to estimate the extent and the magnitude of the radioactive soil contamination by 137Cs due to the Chernobyl fallout. The powerful BME method allows rigorous incorporation of a wide variety of knowledge bases into the spatial estimation procedure, leading to informative contamination maps. Exact measurements ('hard' data) are combined with secondary information on local uncertainties (treated as 'soft' data) to generate science-based uncertainty assessments of soil contamination estimates at unsampled locations. BME describes uncertainty in terms of the posterior probability distributions generated across space, whereas no assumption about the underlying distribution is made and non-linear estimators are automatically incorporated. Traditional estimation variances based on the assumption of an underlying Gaussian distribution (analogous, e.g., to the kriging variance) can be derived as a special case of the BME uncertainty analysis. The BME estimates obtained using hard and soft data are compared with the BME estimates obtained using only hard data. The comparison involves both the accuracy of the estimation maps using the exact data and the assessment of the associated uncertainty using repeated measurements. Furthermore, a comparison of the spatial estimation accuracy obtained by the two methods was carried out using a validation data set of hard data.
Finally, a separate uncertainty analysis was conducted to evaluate the ability of the posterior probabilities to reproduce the distribution of the raw repeated measurements available at certain populated sites. The analysis illustrates the improvement in mapping accuracy obtained by adding soft data to the existing hard data and, in general, demonstrates that the BME method performs well both in terms of estimation accuracy and in terms of estimation error assessment, which are both useful features for the Chernobyl fallout study.

Relevance: 80.00%

Abstract:

The central message of this paper is that nobody should be using the sample covariance matrix for the purpose of portfolio optimization. It contains estimation error of the kind most likely to perturb a mean-variance optimizer. In its place, we suggest using the matrix obtained from the sample covariance matrix through a transformation called shrinkage. This tends to pull the most extreme coefficients towards more central values, thereby systematically reducing estimation error where it matters most. Statistically, the challenge is to know the optimal shrinkage intensity, and we give the formula for that. Without changing any other step in the portfolio optimization process, we show on actual stock market data that shrinkage reduces tracking error relative to a benchmark index, and substantially increases the realized information ratio of the active portfolio manager.
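A minimal sketch of the shrinkage idea, pulling the sample covariance toward a simple diagonal target (one common target choice; the paper's formula for the optimal shrinkage intensity is not reproduced, so `delta` is fixed by hand here):

```python
import numpy as np

def shrink_covariance(returns, delta):
    """Convex combination of the sample covariance and a diagonal
    target holding the average sample variance on the diagonal."""
    s = np.cov(returns, rowvar=False)
    target = np.eye(s.shape[0]) * np.trace(s) / s.shape[0]
    return delta * target + (1.0 - delta) * s

rng = np.random.default_rng(2)
r = rng.standard_normal((60, 10))          # 60 periods, 10 assets (toy data)
s_shrunk = shrink_covariance(r, delta=0.3)

# Off-diagonal entries (the noisiest ones) are pulled toward zero,
# while total variance (the trace) is preserved by this target.
s = np.cov(r, rowvar=False)
```

The extreme off-diagonal coefficients, which are exactly the ones a mean-variance optimizer latches onto, are damped by the factor (1 - delta), which is the "reducing estimation error where it matters most" effect described above.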

Relevance: 80.00%

Abstract:

Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity-penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite-sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near-optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
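The flavor of complexity-penalized empirical risk minimization can be conveyed with a much-simplified sketch, in which polynomial degree classes are penalized by their parameter count; this crude penalty is only a stand-in for the paper's data-driven empirical-cover complexities:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.uniform(-1, 1, n)
y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(n)   # true model: degree 1

def empirical_risk(deg):
    """Mean squared residual of the best polynomial fit of given degree."""
    coef = np.polyfit(x, y, deg)
    resid = y - np.polyval(coef, x)
    return np.mean(resid ** 2)

def penalized_risk(deg, c=0.01):
    """Empirical risk plus a penalty growing with class complexity
    (here simply the parameter count, a hypothetical choice)."""
    return empirical_risk(deg) + c * (deg + 1)

# Select the class minimizing penalized empirical risk.
best_deg = min(range(6), key=penalized_risk)
```

Raw empirical risk alone would keep decreasing with the degree and overfit; the penalty makes the procedure stop at the class whose extra complexity is no longer paid for by a matching drop in risk.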

Relevance: 80.00%

Abstract:

This paper surveys asset allocation methods that extend the traditional approach. An important feature of the traditional approach is that it measures the risk-return tradeoff in terms of the mean and variance of final wealth. However, there are also other important features that are not always made explicit in terms of the investor's wealth, information, and horizon: the investor makes a single portfolio choice based only on the mean and variance of her final financial wealth, and she knows the relevant parameters in that computation. First, the paper describes traditional portfolio choice based on four basic assumptions, while the remaining sections extend those assumptions. Each section also describes the corresponding equilibrium implications in terms of portfolio advice and asset pricing.
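For concreteness, the traditional single-period mean-variance choice referred to above has the textbook unconstrained solution w = (1/gamma) * Sigma^{-1} * mu; a minimal sketch with invented inputs (the survey itself does not prescribe these numbers):

```python
import numpy as np

def mean_variance_weights(mu, sigma, gamma=1.0):
    """Unconstrained mean-variance portfolio for an investor with
    risk aversion gamma: w = (1/gamma) * inv(Sigma) @ mu."""
    return np.linalg.solve(sigma, mu) / gamma

mu = np.array([0.05, 0.07])          # expected returns (assumed)
sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])     # return covariance (assumed)
w = mean_variance_weights(mu, sigma, gamma=2.0)
```

Everything the investor needs here is the mean vector and the covariance matrix, which is exactly the assumption the later sections of the survey relax (parameter uncertainty, richer information, longer horizons).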

Relevance: 80.00%

Abstract:

Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.

Relevance: 80.00%

Abstract:

In this paper, the problem of frame-level symbol timing acquisition for UWB signals is addressed. The main goal is the derivation of a frame-level timing estimator which does not require any prior knowledge of either the transmitted symbols or the received template waveform. The independence with respect to the received waveform is of special interest in UWB communication systems, where a fast and accurate estimation of the end-to-end channel response is a challenging and computationally demanding task. The proposed estimator is derived under the unconditional maximum likelihood criterion, and because of the low power of UWB signals, the low-SNR assumption is adopted. As a result, an optimal frame-level timing estimator is derived which outperforms existing acquisition methods in low-SNR scenarios.
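A generic flavor of non-data-aided frame-level timing acquisition can be sketched by averaging per-sample energy across many frames and picking the peak; this is only an illustration of waveform-independent acquisition, not the unconditional maximum likelihood estimator derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
nf = 100        # samples per frame (assumed)
frames = 200    # number of frames averaged
offset = 37     # true frame-level timing offset (unknown to the receiver)

# Received pulse shape unknown to the receiver: a short random waveform.
pulse = np.array([1.0, -0.8, 0.5])

rx = 0.5 * rng.standard_normal(nf * frames)     # noise floor (low SNR)
for m in range(frames):
    start = m * nf + offset
    rx[start:start + pulse.size] += pulse

# Acquisition sketch: fold the signal into frames, average the
# per-sample energy, and take the position where it peaks.
energy = (rx ** 2).reshape(frames, nf).mean(axis=0)
offset_hat = int(np.argmax(energy))
```

Note that the estimator never uses the pulse shape or the symbols, only second-order statistics, which mirrors the waveform-independence goal stated in the abstract; averaging over many frames is what makes it usable at low SNR.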

Relevance: 80.00%

Abstract:

This study aimed to compare thematic maps of soybean yield for different sampling grids, using geostatistical methods (the semivariance function and kriging). The analysis was performed on soybean yield data, in t ha-1, from a commercial area sampled with regular grids with distances between points of 25x25 m, 50x50 m, 75x75 m and 100x100 m (549, 188, 66 and 44 sampling points, respectively), and on data obtained by yield monitors. Optimized sampling schemes were also generated with the Simulated Annealing algorithm, using maximization of the overall accuracy measure as the optimization criterion. The results showed that sample size and sample density influenced the description of the spatial distribution of soybean yield. When the sample size was increased, the thematic maps described the spatial variability of soybean yield more efficiently (higher values of the accuracy indices and lower values of the sum of squared estimation errors). In addition, more accurate maps were obtained, especially with the optimized sample configurations with 188 and 549 sample points.
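The semivariance function used in the study can be illustrated with the classical (Matheron) empirical estimator on a toy grid; the grid, the lags, and the tolerance below are arbitrary choices for the example:

```python
import numpy as np

def empirical_semivariance(xy, z, lags, tol):
    """Classical (Matheron) semivariance estimator per distance lag:
    gamma(h) = mean of 0.5*(z_i - z_j)^2 over pairs with |d_ij - h| < tol."""
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    iu = np.triu_indices(len(z), k=1)               # count each pair once
    dist = d[iu]
    sq = 0.5 * (z[iu[0]] - z[iu[1]]) ** 2
    return np.array([sq[np.abs(dist - h) < tol].mean() for h in lags])

# Toy 5x5 grid with a smooth spatial trend: semivariance grows with lag.
gx, gy = np.meshgrid(np.arange(5.0), np.arange(5.0))
xy = np.column_stack([gx.ravel(), gy.ravel()])
z = gx.ravel() + gy.ravel()
gamma = empirical_semivariance(xy, z, lags=[1.0, 2.0, 3.0], tol=0.3)
```

A rising `gamma` with distance is the spatial dependence that kriging exploits; a fitted model of this curve supplies the weights used to interpolate yield at unsampled locations, and sparser grids estimate this curve, and hence the map, less reliably.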

Relevance: 80.00%

Abstract:

The focus of this thesis is the study of both the technical and the economic possibilities of novel on-line condition-monitoring techniques in underground low-voltage distribution cable networks. The thesis consists of a literature study of fault-progression mechanisms in modern low-voltage cables, laboratory measurements to determine the basis and restrictions of novel on-line condition-monitoring methods, and an economic evaluation based on fault statistics and information gathered from Finnish distribution system operators. This thesis is closely related to the master's thesis "Channel Estimation and On-line Diagnosis of LV Distribution Cabling", which focuses more on the actual condition-monitoring methods and the signal theory behind them.