101 results for Mean squared error
Abstract:
Eleven GCMs (BCCR-BCM2.0, INGV-ECHAM4, GFDL2.0, GFDL2.1, GISS, IPSL-CM4, MIROC3, MRI-CGCM2, NCAR-PCM1, UKMO-HADCM3 and UKMO-HADGEM1) were evaluated for India (covering 73 grid points of 2.5 degrees x 2.5 degrees) for the climate variable 'precipitation rate' using 5 performance indicators: the correlation coefficient, normalised root mean square error, absolute normalised mean bias error, average absolute relative error and skill score. We used a nested bias-correction methodology to remove the systematic biases in the GCM simulations. The entropy method was employed to obtain weights for these 5 indicators. Ranks of the 11 GCMs were obtained through a multicriterion decision-making outranking method, PROMETHEE-2 (Preference Ranking Organisation Method for Enrichment Evaluation). An equal-weight scenario (assigning a weight of 0.2 to each indicator) was also used to rank the GCMs. An effort was also made to rank the GCMs for 4 river basins (Godavari, Krishna, Mahanadi and Cauvery) in peninsular India. The upper Malaprabha catchment in Karnataka, India, was chosen to demonstrate the entropy and PROMETHEE-2 methods. The Spearman rank correlation coefficient was employed to assess the association between the ranking patterns. Our results suggest that the ensemble of GFDL2.0, MIROC3, BCCR-BCM2.0, UKMO-HADCM3, MPI-ECHAM4 and UKMO-HADGEM1 is suitable for India. The methodology proposed can be extended to rank GCMs for any selected region.
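The entropy weighting step described above can be sketched numerically: an indicator receives a larger weight the more the GCM scores spread out on it. The score matrix below is hypothetical, not data from the study; this is a minimal sketch of the standard entropy-weight formula, assuming non-negative indicator scores.

```python
import numpy as np

def entropy_weights(payoff):
    """Entropy weights for an (alternatives x indicators) payoff matrix.
    Columns are assumed non-negative with at least one positive entry."""
    m = payoff.shape[0]
    p = payoff / payoff.sum(axis=0)        # column-normalise to probabilities
    k = 1.0 / np.log(m)
    e = -k * (p * np.log(p)).sum(axis=0)   # entropy of each indicator
    d = 1.0 - e                            # degree of diversification
    return d / d.sum()                     # normalised weights

# hypothetical scores of 4 GCMs on 3 performance indicators
scores = np.array([[0.9, 0.2, 0.5],
                   [0.8, 0.3, 0.5],
                   [0.7, 0.9, 0.5],
                   [0.6, 0.1, 0.5]])
w = entropy_weights(scores)
```

Note that the third indicator, on which all GCMs score identically, carries no discriminating information and therefore gets zero weight.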
Abstract:
In this paper, we propose a multiple-input multiple-output (MIMO) receiver algorithm that exploits the channel hardening that occurs in large MIMO channels. Channel hardening refers to the phenomenon where the off-diagonal terms of the matrix H^T H become increasingly weak compared to the diagonal terms as the size of the channel gain matrix H increases. Specifically, we propose a message passing detection (MPD) algorithm which works with the real-valued matched-filtered received vector (whose signal term becomes H^T H x, where x is the transmitted vector), and uses a Gaussian approximation on the off-diagonal terms of the matrix H^T H. We also propose a simple estimation scheme which directly obtains an estimate of H^T H (instead of an estimate of H), which is used as an effective channel estimate in the MPD algorithm. We refer to this receiver as the channel hardening-exploiting message passing (CHEMP) receiver. The proposed CHEMP receiver achieves very good performance in large-scale MIMO systems (e.g., in systems with 16 to 128 uplink users and 128 base station antennas). For the considered large MIMO settings, the complexity of the proposed MPD algorithm is almost the same as, or less than, that of minimum mean square error (MMSE) detection, because the MPD algorithm does not need a matrix inversion. It also achieves significantly better performance than MMSE and other message passing detection algorithms that use an MMSE estimate of H. Further, we design optimized irregular low-density parity-check (LDPC) codes specific to the considered large MIMO channel and the CHEMP receiver through EXIT chart matching. The LDPC codes thus obtained achieve improved coded bit error rate performance compared to off-the-shelf irregular LDPC codes.
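The channel-hardening effect this receiver relies on is easy to reproduce numerically: for an i.i.d. Gaussian channel matrix H, the off-diagonal entries of H^T H / n shrink relative to the diagonal as the dimension n grows. A minimal sketch with synthetic channels (dimensions and trial count chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def hardening_ratio(n, trials=20):
    """Average |off-diagonal| / |diagonal| of J = H^T H / n for i.i.d. H."""
    r = 0.0
    for _ in range(trials):
        H = rng.standard_normal((n, n))
        J = H.T @ H / n
        off = np.abs(J - np.diag(np.diag(J))).sum() / (n * (n - 1))
        diag = np.abs(np.diag(J)).mean()
        r += off / diag
    return r / trials

small, large = hardening_ratio(8), hardening_ratio(128)
```

As n grows, the ratio falls roughly as 1/sqrt(n), which is why treating the off-diagonal contribution as Gaussian noise becomes increasingly accurate in large MIMO settings.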
Abstract:
A design methodology based on the Minimum Bit Error Ratio (MBER) framework is proposed for a non-regenerative Multiple-Input Multiple-Output (MIMO) relay-aided system to determine various linear parameters. We consider both the Relay-Destination (RD) as well as the Source-Relay-Destination (SRD) link design based on this MBER framework, including the pre-coder, the Amplify-and-Forward (AF) matrix and the equalizer matrix of our system. It has been shown in the previous literature that MBER based communication systems are capable of reducing the Bit-Error-Ratio (BER) compared to their Linear Minimum Mean Square Error (LMMSE) based counterparts. We design a novel relay-aided system using various signal constellations, ranging from QPSK to the general M-QAM and M-PSK constellations. Finally, we propose its sub-optimal versions for reducing the computational complexity imposed. Our simulation results demonstrate that the proposed scheme indeed achieves a significant BER reduction over the existing LMMSE scheme.
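As a point of reference for BER comparisons of the kind discussed above, a Monte Carlo BER estimate can be checked against a closed-form expression. This is a generic illustration for uncoded BPSK over AWGN, not the relay-aided MBER system of the paper; the SNR value is arbitrary.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(2)

def ber_bpsk_awgn(snr_db, n_bits=200_000):
    """Monte Carlo bit error ratio of BPSK over AWGN."""
    snr = 10 ** (snr_db / 10)                     # Eb/N0 as a linear ratio
    bits = rng.integers(0, 2, n_bits)
    x = 2 * bits - 1                              # BPSK mapping {0,1} -> {-1,+1}
    y = x + rng.standard_normal(n_bits) / np.sqrt(2 * snr)
    return np.mean((y > 0).astype(int) != bits)

def ber_theory(snr_db):
    """Theoretical BER: Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))."""
    return 0.5 * erfc(sqrt(10 ** (snr_db / 10)))

sim, theo = ber_bpsk_awgn(6.0), ber_theory(6.0)
```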
Abstract:
We consider the problem of parameter estimation from real-valued multi-tone signals. Such problems arise frequently in spectral estimation. More recently, they have gained new importance in finite-rate-of-innovation signal sampling and reconstruction. The annihilating filter is a key tool for parameter estimation in these problems. The standard annihilating filter design has to be modified to yield accurate estimates when dealing with real sinusoids, because the real-valued nature of the sinusoids must be factored into the design. We show that the constraint on the annihilating filter can be relaxed by making use of the Hilbert transform, and we refer to this approach as the Hilbert annihilating filter approach. We show that accurate parameter estimation is possible by this approach. In the single-tone case, the mean-square error performance improves by 6 dB for signal-to-noise ratios (SNRs) greater than 0 dB. We also present experimental results in the multi-tone case, which show that a significant improvement (about 6 dB) is obtained when the parameters are close to 0 or pi; in the mid-frequency range, the improvement is about 2 to 3 dB.
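The role of the Hilbert transform here, turning a real sinusoid into a one-sided (analytic) signal whose parameter is easier to estimate, can be illustrated on a single tone. This is only a generic analytic-signal frequency estimate, not the paper's Hilbert annihilating filter; the tone frequency and signal length are arbitrary.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT (discrete Hilbert transform); x real, even length."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = h[N // 2] = 1.0      # keep DC and Nyquist
    h[1:N // 2] = 2.0           # double positive frequencies, zero negative ones
    return np.fft.ifft(X * h)

# a single real tone at normalised frequency w0 (rad/sample)
N, w0 = 256, 0.3
n = np.arange(N)
x = np.cos(w0 * n)
z = analytic_signal(x)
# frequency estimate from the average phase increment of the analytic signal
w_hat = np.angle(np.mean(z[1:] * np.conj(z[:-1])))
```

The real cosine has spectral lines at both +w0 and -w0; the analytic signal suppresses the negative-frequency line, so the phase increment of z directly reveals w0.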
Abstract:
Wavelength Division Multiplexing (WDM) over fibre links helps exploit the high bandwidth capacity of single-mode fibres. A typical WDM link, consisting of a laser source, multiplexer/demultiplexer, amplifier and detector, is considered for obtaining the open-loop gain model of the link. The methodology used here is to obtain individual component models using mathematical and various curve-fitting techniques. These individual models are then combined to obtain the WDM link model. The objective is to deduce a single-variable model for the WDM link in terms of the input current to the system, thus providing a black-box solution for the link. The Root Mean Square Error (RMSE) associated with each of the approximated models is given for comparison. This helps the designer select a suitable WDM link model during complex link design.
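The RMSE comparison between candidate component models can be sketched as follows. The data and the linear output-power-versus-current relationship are hypothetical, purely to show how RMSE ranks competing curve fits:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error between measured and model-predicted outputs."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

# hypothetical component data: output power vs input current, with measurement noise
current = np.linspace(10, 50, 9)
power = 0.8 * current - 4.0 + np.array([0.1, -0.2, 0.05, 0.0, 0.15,
                                        -0.1, 0.05, -0.05, 0.1])

lin = np.polyval(np.polyfit(current, power, 1), current)   # linear model
quad = np.polyval(np.polyfit(current, power, 2), current)  # quadratic model
err_lin, err_quad = rmse(power, lin), rmse(power, quad)
```

On the fitting data a higher-order model can never have a larger RMSE, so the designer would also weigh model simplicity and out-of-sample behaviour, not RMSE alone.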
Abstract:
Computing the maximum of sensor readings arises in several environmental, health, and industrial monitoring applications of wireless sensor networks (WSNs). We characterize several novel design trade-offs that arise when green energy harvesting (EH) WSNs, which promise perpetual lifetimes, are deployed for this purpose. The nodes harvest renewable energy from the environment to communicate their readings to a fusion node, which then periodically estimates the maximum. For a randomized transmission schedule in which a pre-specified number of randomly selected nodes transmit in each sensor data collection round, we analyze the mean absolute error (MAE), defined as the mean of the absolute difference between the maximum and the fusion node's estimate of it in each round. We optimize the transmit power and the number of scheduled nodes to minimize the MAE, both when the nodes have channel state information (CSI) and when they do not. Our results highlight how the optimal system operation depends on the EH rate, the availability and cost of acquiring CSI, quantization, and the size of the scheduled subset. Our analysis applies to a general class of sensor reading and EH random processes.
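The MAE metric defined above can be simulated directly in a toy setting: readings are i.i.d. uniform, a random subset of k nodes transmits each round, and the fusion node takes the maximum of what it receives. The node counts and reading distribution are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)

def mae_of_max(n_nodes, k, rounds=2000):
    """MAE between the true maximum reading and the maximum over a
    random subset of k scheduled nodes (readings i.i.d. uniform on [0, 1])."""
    err = 0.0
    for _ in range(rounds):
        readings = rng.random(n_nodes)
        subset = rng.choice(n_nodes, size=k, replace=False)
        err += abs(readings.max() - readings[subset].max())
    return err / rounds

# scheduling few vs. many of 20 nodes
mae_small, mae_large = mae_of_max(20, 2), mae_of_max(20, 15)
```

Scheduling more nodes shrinks the MAE, which is exactly the trade-off against per-round energy cost that the paper optimizes.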
Abstract:
The impulse responses of the wireless channels between the N-t transmit and N-r receive antennas of a MIMO-OFDM system are group approximately sparse (ga-sparse), i.e., the channels have a small number of significant paths relative to the channel delay spread, and the time-lags of the significant paths between transmit and receive antenna pairs coincide. Often, wireless channels are also group approximately cluster-sparse (gac-sparse), i.e., every ga-sparse channel consists of clusters, where a few clusters have all strong components while most clusters have all weak components. In this paper, we cast the problem of estimating the ga-sparse and gac-sparse block-fading and time-varying channels in the sparse Bayesian learning (SBL) framework and propose a bouquet of novel algorithms for pilot-based channel estimation, and joint channel estimation and data detection, in MIMO-OFDM systems. The proposed algorithms are capable of estimating the sparse wireless channels even when the measurement matrix is only partially known. Further, we employ a first-order autoregressive model of the temporal variation of the ga-sparse and gac-sparse channels and propose a recursive Kalman filtering and smoothing (KFS) technique for joint channel estimation, tracking, and data detection. We also propose novel, parallel-implementation-based, low-complexity techniques for estimating gac-sparse channels. Monte Carlo simulations illustrate the benefit of exploiting the gac-sparse structure in the wireless channel in terms of mean square error (MSE) and coded bit error rate (BER) performance.
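The combination of a first-order autoregressive channel model with Kalman tracking can be sketched in the simplest scalar case: one channel tap observed through noisy pilots. This omits the sparsity structure, data detection, and smoothing of the proposed KFS technique; all parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def kalman_ar1(y, rho, q, r):
    """Scalar Kalman filter for an AR(1) channel tap h[t] = rho*h[t-1] + w[t],
    observed as y[t] = h[t] + v[t] (pilot measurement, noise variance r)."""
    h_est, p = 0.0, 1.0
    out = []
    for yt in y:
        h_pred, p_pred = rho * h_est, rho ** 2 * p + q      # predict
        k = p_pred / (p_pred + r)                           # Kalman gain
        h_est = h_pred + k * (yt - h_pred)                  # update
        p = (1 - k) * p_pred
        out.append(h_est)
    return np.array(out)

# simulate a slowly varying tap and noisy pilot observations
rho, q, r, T = 0.99, 1 - 0.99 ** 2, 0.5, 4000
h = np.zeros(T)
for t in range(1, T):
    h[t] = rho * h[t - 1] + rng.standard_normal() * np.sqrt(q)
y = h + rng.standard_normal(T) * np.sqrt(r)
h_hat = kalman_ar1(y, rho, q, r)
mse_raw = np.mean((y - h) ** 2)    # error if pilots are used as-is
mse_kf = np.mean((h_hat - h) ** 2)  # error after Kalman tracking
```

Exploiting the temporal correlation (rho close to 1) lets the filter average over time, giving a substantially lower MSE than the raw pilot observations.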
Abstract:
Changes in the protonation and deprotonation of amino acid residues in proteins play a key role in many biological processes and pathways. Here, we report calculations of the free-energy profile for the protonation-deprotonation reaction of the 20 canonical alpha amino acids in aqueous solution using ab initio Car-Parrinello molecular dynamics simulations coupled with metadynamics sampling. We show that the calculated change in free energy of the dissociation reaction provides estimates of the multiple pK(a) values of the amino acids that are in good agreement with experiment. We use the bond-length-dependent number of protons coordinated to the hydroxyl oxygen of the carboxylic group and to the amine group as the collective variables to explore the free-energy profiles of the Brønsted acid-base chemistry of amino acids in aqueous solution. We ensure that the amino acid undergoing dissociation is solvated by at least three hydration shells, with all water molecules included in the simulations. The method works equally well for amino acids with neutral, acidic and basic side chains and provides estimates of the multiple pK(a) values with a mean relative error, with respect to experimental results, of 0.2 pK(a) units.
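The link between a computed dissociation free energy and a pK(a) estimate is the standard thermodynamic relation ΔG = 2.303 RT pK(a). A minimal sketch (the glycine carboxyl pK(a) of about 2.35 is an approximate experimental number, used only as an example):

```python
import math

R = 8.314462618e-3   # gas constant, kJ/(mol K)
T = 298.15           # room temperature, K

def pka_from_dG(dG_kJ_per_mol):
    """pKa from the deprotonation free energy: dG = ln(10) * R * T * pKa."""
    return dG_kJ_per_mol / (math.log(10) * R * T)

def dG_from_pka(pka):
    """Inverse relation: free energy corresponding to a given pKa."""
    return math.log(10) * R * T * pka

# e.g. the carboxyl group of glycine, experimental pKa around 2.35
dG = dG_from_pka(2.35)
```

One pK(a) unit corresponds to about 5.7 kJ/mol at room temperature, which puts the reported 0.2-unit mean error at roughly 1 kJ/mol in free energy.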
Abstract:
This study presents a comprehensive evaluation of five widely used multisatellite precipitation estimates (MPEs) against a 1 degree x 1 degree gridded rain gauge data set as ground truth over India. One decade of observations is used to assess the performance of the various MPEs (the Climate Prediction Center (CPC)-South Asia data set, the CPC Morphing Technique (CMORPH), Precipitation Estimation From Remotely Sensed Information Using Artificial Neural Networks, the Tropical Rainfall Measuring Mission's Multisatellite Precipitation Analysis (TMPA-3B42), and the Global Precipitation Climatology Project). All MPEs have high rain-detection skill, with a large probability of detection (POD) and small "missing" values. However, the detection sensitivity differs from one product (and also one region) to another. While CMORPH has the lowest sensitivity for detecting rain, CPC shows the highest sensitivity and often overdetects rain, as evidenced by its large POD and false alarm ratio and small missing values. All MPEs show higher rain sensitivity over eastern India than over western India. These differential sensitivities are found to alter the biases in rain amount differently. All MPEs show similar spatial patterns of seasonal rain bias and root-mean-square error, but their spatial variability across India is complex and pronounced. The MPEs overestimate rainfall over the dry regions (northwest and southeast India) and severely underestimate it over mountainous regions (the west coast and northeast India), whereas the bias is relatively small over the core monsoon zone. A higher occurrence of virga rain due to subcloud evaporation, and the possible missing of small-scale convective events by gauges over the dry regions, are the main reasons for the observed overestimation of rain by the MPEs. The decomposed components of total bias show that the major part of the overestimation is due to false precipitation.
The severe underestimation of rain along the west coast is attributed to the predominant occurrence of shallow rain and underestimation of moderate to heavy rain by MPEs. The decomposed components suggest that the missed precipitation and hit bias are the leading error sources for the total bias along the west coast. All evaluation metrics are found to be nearly equal in two contrasting monsoon seasons (southwest and northeast), indicating that the performance of MPEs does not change with the season, at least over southeast India. Among various MPEs, the performance of TMPA is found to be better than others, as it reproduced most of the spatial variability exhibited by the reference.
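The detection metrics used throughout this evaluation, probability of detection (POD) and false alarm ratio (FAR), come from a rain/no-rain contingency table of hits, misses, and false alarms. A minimal sketch with hypothetical counts:

```python
def detection_skill(hits, misses, false_alarms):
    """POD and FAR from a rain/no-rain contingency table.
    POD = hits / (hits + misses); FAR = false alarms / (hits + false alarms)."""
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    return pod, far

# hypothetical counts for one satellite product evaluated against gauges
pod, far = detection_skill(hits=800, misses=200, false_alarms=400)
```

A product like CPC that "overdetects" rain shows exactly this signature: few misses push POD up, but the extra detections inflate FAR at the same time.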
Abstract:
Using coherent light to interrogate a turbid object perturbed by a focused ultrasound (US) beam, we demonstrate localized measurement of the dynamics in the focal region, termed the region of interest (ROI), from the decay of the modulation in the intensity autocorrelation of light. When the ROI contains a pipe flow, the decay is shown to be sensitive to the average flow velocity, from which the mean-squared displacement (MSD) of the scattering centers in the flow can be estimated. While the MSD estimated is an order of magnitude higher than that obtainable through the usual diffusing wave spectroscopy (DWS) without the US, it is more accurate, as verified by the volume flow estimated from it. It is further observed that, whereas the MSD from the localized measurement grows with time as tau^alpha with alpha approximately 1.65, without the US alpha is seen to be much smaller. Moreover, with the local measurement, this super-diffusive nature of the pipe flow persists longer, i.e., over a wider range of initial tau, than with the unassisted DWS. The reason for the super-diffusivity of the flow, i.e., alpha < 2, in the ROI is the presence of a fluctuating (thermodynamically nonequilibrium) component in the dynamics induced by the US forcing. Beyond this initial range, both methods measure MSDs that rise linearly with time, indicating that ballistic and near-ballistic photons hardly capture anything beyond the background Brownian motion. (C) 2015 Optical Society of America
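The exponent alpha in MSD(tau) ~ tau^alpha is conventionally read off as the slope of a log-log fit. A minimal sketch with a synthetic power-law MSD (the prefactor and exponent are arbitrary, chosen only to mimic the reported alpha of about 1.65):

```python
import numpy as np

def msd_exponent(tau, msd):
    """Exponent alpha of MSD(tau) ~ tau**alpha from a log-log least-squares fit."""
    slope, _ = np.polyfit(np.log(tau), np.log(msd), 1)
    return slope

# synthetic super-diffusive MSD curve
tau = np.logspace(-4, -2, 30)
msd = 2.5e-3 * tau ** 1.65
alpha = msd_exponent(tau, msd)
```

On real autocorrelation data the fit would be restricted to the initial range of tau, since the abstract notes the MSD crosses over to linear (alpha = 1) growth at larger lag times.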
Abstract:
Purpose: A prior-image-based temporally constrained reconstruction (PITCR) algorithm was developed for obtaining accurate temperature maps with better volume coverage and spatial and temporal resolution than other algorithms for highly undersampled data in magnetic resonance (MR) thermometry. Methods: The proposed PITCR approach is an algorithm that gives weight to the prior image and performs accurate reconstruction in a dynamic imaging environment. The PITCR method is compared with the temporally constrained reconstruction (TCR) algorithm using pork muscle data. Results: The PITCR method provides superior performance compared to the TCR approach with highly undersampled data. The proposed approach is computationally more expensive than the TCR approach, but this could be offset by the advantage of reconstructing from fewer measurements. In the case of reconstruction of temperature maps from 16% of the fully sampled data, the PITCR approach was 1.57x slower than the TCR approach, while the root mean square error using PITCR was 0.784 compared to 2.815 with the TCR scheme. Conclusions: The PITCR approach is able to perform more accurate reconstruction of temperature maps than the TCR approach with highly undersampled data in MR-guided high-intensity focused ultrasound. (C) 2015 American Association of Physicists in Medicine.