101 results for "Mean squared error"
Abstract:
We report a nuclear magnetic resonance (NMR) study of water confined inside single-walled carbon nanotubes (SWNTs) of ~1.4 nm diameter. We show that the confined water does not freeze even down to 223 K. A pulsed field gradient (PFG) NMR method is used to determine the mean squared displacement (MSD) of the water molecules inside the nanotubes at temperatures below 273 K, where the bulk water outside the nanotubes freezes and hence does not contribute to the proton NMR signal. We show that the mean squared displacement varies as the square root of time, as predicted for single-file diffusion in a one-dimensional channel. We propose a qualitative understanding of our results based on available molecular dynamics simulations.
Abstract:
We use atomistic molecular dynamics (MD) simulations to study the diffusion of water molecules confined inside narrow (6,6) carbon nanorings. The water molecules form two oppositely polarized chains. It is shown that the effective interaction between these two chains is repulsive in nature. The computed mean-squared displacement (MSD) clearly shows a scaling with time
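The square-root-of-time scaling reported in the two abstracts above can be checked numerically by fitting the MSD on a log-log scale. The sketch below uses synthetic MSD data (the mobility prefactor and time grid are illustrative, not taken from either paper); single-file diffusion gives an exponent of 0.5, ordinary diffusion 1.0.

```python
import numpy as np

def scaling_exponent(t, msd):
    """Fit MSD(t) ~ t**alpha on a log-log scale and return alpha."""
    alpha, _ = np.polyfit(np.log(t), np.log(msd), 1)
    return alpha

t = np.linspace(1.0, 100.0, 200)       # time (arbitrary units)
msd_sf = 2.0 * np.sqrt(t)              # single-file form, mobility F = 1
print(round(scaling_exponent(t, msd_sf), 2))   # prints 0.5
```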
Abstract:
In uplink orthogonal frequency division multiple access (OFDMA) systems, multiuser interference (MUI) occurs due to different carrier frequency offsets (CFO) of different users at the receiver. In this paper, we present a minimum mean square error (MMSE) based approach to MUI cancellation in uplink OFDMA. We derive a recursion to approach the MMSE solution. We present a structure-wise and performance-wise comparison of this recursive MMSE solution with a linear PIC receiver as well as other detectors recently proposed in the literature. We show that the proposed recursive MMSE solution encompasses several known detectors in the literature as special cases.
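The recursion described above approaches the standard linear MMSE solution. A minimal sketch of that baseline estimator follows (the random channel matrix, BPSK symbols, and noiseless observation are illustrative assumptions, not the paper's OFDMA setup):

```python
import numpy as np

def mmse_detect(H, y, sigma2):
    """Linear MMSE estimate for y = H x + n, unit-power symbols,
    noise variance sigma2: x_hat = (H^H H + sigma2 I)^{-1} H^H y."""
    K = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + sigma2 * np.eye(K),
                           H.conj().T @ y)

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
x = np.sign(rng.standard_normal(4))    # BPSK symbols (+1/-1)
y = H @ x                              # noiseless for illustration
x_hat = mmse_detect(H, y, sigma2=1e-6)
print(np.sign(x_hat.real))             # matches the transmitted x
```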
Abstract:
In positron emission tomography (PET), image reconstruction is a demanding problem. Since PET image reconstruction is an ill-posed inverse problem, new methodologies need to be developed. Although previous studies show that incorporating spatial and median priors improves the image quality, image artifacts such as over-smoothing and streaking are evident in the reconstructed image. In this work, we use a simple yet powerful technique to tackle the PET image reconstruction problem. The proposed technique is based on integrating a Bayesian approach with a finite impulse response (FIR) filter. An FIR filter is designed whose coefficients are determined from a surface diffusion model. The resulting reconstructed image is iteratively filtered and fed back to obtain the new estimate. Experiments are performed on a simulated PET system. The results show that the proposed approach is better than the recently proposed MRP algorithm in terms of image quality and normalized mean square error.
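The normalized mean square error figure of merit mentioned above can be sketched as follows (normalizing by the energy of the ground-truth phantom is an assumption; the paper may use a different normalization):

```python
import numpy as np

def nmse(reconstruction, phantom):
    """Normalized MSE: error energy divided by phantom energy."""
    diff = reconstruction - phantom
    return np.sum(diff ** 2) / np.sum(phantom ** 2)

# toy square phantom and a uniformly biased "reconstruction"
phantom = np.zeros((8, 8))
phantom[2:6, 2:6] = 1.0
recon = phantom + 0.1
print(round(nmse(recon, phantom), 3))   # prints 0.04
```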
Abstract:
The mean-squared voltage fluctuation of a disordered conductor of length L smaller than the phase coherence length L_ϕ is independent of the distance between the probes. We obtain this result using the voltage additivity and the known results for the conductance fluctuation. Our results complement recent theoretical and experimental findings.
Abstract:
The method of least squares could be used to refine an imperfectly related trial structure by adopting one of the following two procedures: (i) using all the observed data at one time or (ii) successive refinement in stages with data of increasing resolution. While the former procedure is successful in the case of trial structures which are sufficiently accurate, only the latter has been found to be successful when the mean positional error (i.e. ⟨|Δr|⟩) for the atoms in the trial structure is large. This paper makes a theoretical study of the variation of the R index, mean phase-angle error, etc. as a function of ⟨|Δr|⟩ for data corresponding to different resolutions, in order to find the best refinement procedure [i.e. (i) or (ii)] that could be successfully employed for refining trial structures in which ⟨|Δr|⟩ has large, medium and low values. It is found that a trial structure for which the mean positional error is large can be refined only by successive refinement with data of increasing resolution.
Abstract:
Often the soil hydraulic parameters are obtained by inversion of measured data (e.g. soil moisture, pressure head, cumulative infiltration, etc.). However, the inverse problem in the unsaturated zone is ill-posed for various reasons, and hence the parameters become non-unique. The presence of multiple soil layers brings additional complexity to the inverse modelling. The generalized likelihood uncertainty estimate (GLUE) is a useful approach for estimating the parameters and their uncertainty when dealing with soil moisture dynamics, which is a highly non-linear problem. Because the estimated parameters depend on the modelling scale, inverse modelling carried out on laboratory data and field data may provide independent estimates. The objective of this paper is to compare the parameters and their uncertainty estimated through experiments in the laboratory and in the field and to assess which of the soil hydraulic parameters are independent of the experiment. The first two layers in the field site are characterized as loamy sand and loam. For the laboratory experiment, the mean soil moisture and pressure head at three depths are measured at half-hour intervals for a period of 1 week using the evaporation method, whereas for the field experiment soil moisture at three depths (60, 110, and 200 cm) is measured at 1 h intervals for 2 years. A one-dimensional soil moisture model based on the finite difference method was used. The calibration and validation periods are approximately 1 year each. The model performance was found to be good, with root mean square error (RMSE) varying from 2 to 4 cm³ cm⁻³. It is found from the two experiments that the mean and uncertainty of the saturated soil moisture (θ_s) and the shape parameter (n) of the van Genuchten equation are similar for both soil types. Copyright (C) 2010 John Wiley & Sons, Ltd.
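The RMSE criterion used above to judge the model fit is the standard one; a minimal sketch with made-up soil moisture values (not the paper's data):

```python
import numpy as np

def rmse(simulated, observed):
    """Root mean square error between simulated and observed series."""
    diff = np.asarray(simulated) - np.asarray(observed)
    return np.sqrt(np.mean(diff ** 2))

observed  = [0.30, 0.28, 0.25, 0.27]   # soil moisture (cm^3 cm^-3)
simulated = [0.32, 0.27, 0.26, 0.25]
print(round(rmse(simulated, observed), 4))   # prints 0.0158
```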
Abstract:
A new feature-based technique is introduced to solve the nonlinear forward problem (FP) of electrical capacitance tomography, with the target application of monitoring the metal fill profile in the lost foam casting process. The new technique is based on combining a linear solution to the FP with a correction factor (CF). The CF is estimated using an artificial neural network (ANN) trained on key features extracted from the metal distribution. The CF adjusts the linear solution of the FP to account for the nonlinear effects caused by the shielding effect of the metal. This approach shows promising results and avoids the curse of dimensionality by using features, rather than the actual metal distribution, to train the ANN. The ANN is trained using nine features extracted from the metal distributions as input. The expected sensor readings are generated using ANSYS software. The performance of the ANN on the training and testing data was satisfactory, with an average root-mean-square error of 2.2%.
Abstract:
The use of shear wave velocity data as a field index for evaluating the liquefaction potential of sands is receiving increased attention because shear wave velocity and liquefaction resistance are similarly influenced by many of the same factors, such as void ratio, state of stress, stress history and geologic age. In this paper, a support vector machine (SVM) based classification approach has been used to assess liquefaction potential from actual shear wave velocity data. This approach approximately implements the structural risk minimization (SRM) induction principle, which aims at minimizing a bound on the generalization error of a model rather than minimizing only the mean square error over the data set. Here SVM has been used as a classification tool to predict the liquefaction potential of a soil based on shear wave velocity. The dataset consists of soil characteristics such as effective vertical stress (σ′_v0), soil type and shear wave velocity (V_s), and earthquake parameters such as peak horizontal acceleration (a_max) and earthquake magnitude (M). Of the 186 available records, 130 are used for training and the remaining 56 for testing the model. The study indicated that SVM can successfully model the complex relationship between seismic parameters, soil parameters and liquefaction potential. In the model based on soil characteristics, the input parameters are σ′_v0, soil type, V_s, a_max and M. The other model, based on shear wave velocity alone, uses V_s, a_max and M as input parameters. In this paper, it is demonstrated that V_s alone can be used to predict the liquefaction potential of a soil using a support vector machine model. (C) 2010 Elsevier B.V. All rights reserved.
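A sketch of the reduced model (V_s, a_max and M as the only inputs) using a generic SVM classifier. The data below are synthetic placeholders generated from a toy liquefaction rule, not the paper's 186 field records, and the RBF kernel and C value are illustrative choices:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 200
Vs   = rng.uniform(100, 300, n)     # shear wave velocity (m/s)
amax = rng.uniform(0.1, 0.6, n)     # peak horizontal acceleration (g)
M    = rng.uniform(5.5, 8.0, n)     # earthquake magnitude
# toy labelling rule: soft soil + strong shaking -> liquefaction (1)
y = (amax * M / Vs > 0.008).astype(int)

X = np.column_stack([Vs, amax, M])
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize the features
clf = SVC(kernel="rbf", C=10.0).fit(X[:130], y[:130])
print(round(clf.score(X[130:], y[130:]), 2))   # held-out accuracy
```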
Abstract:
In the direction of arrival (DOA) estimation problem, we encounter both finite data and insufficient knowledge of the array characterization. It is therefore important to study how subspace-based methods perform under such conditions. We analyze the finite data performance of the multiple signal classification (MUSIC) and minimum norm (min-norm) methods in the presence of sensor gain and phase errors, and derive expressions for the mean square error (MSE) in the DOA estimates. These expressions are first derived assuming an arbitrary array and then simplified for the special case of a uniform linear array with isotropic sensors. When further simplified for the cases of finite data only and sensor errors only, they reduce to the recent results given in [9-12]. Computer simulations are used to verify the closeness between the predicted and simulated values of the MSE.
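A minimal MUSIC sketch for a uniform linear array, illustrating the subspace method whose finite-data MSE the paper analyzes. The 8-sensor array with half-wavelength spacing, 200 snapshots, and the two source directions are illustrative choices, not the paper's simulation setup:

```python
import numpy as np

def steering(theta_deg, m):
    """ULA steering vector, element spacing d = lambda/2."""
    k = np.pi * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(m))

M, N = 8, 200                          # sensors, snapshots
doas = [-20.0, 30.0]                   # true directions (degrees)
rng = np.random.default_rng(0)
A = np.column_stack([steering(t, M) for t in doas])
S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
noise = 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
X = A @ S + noise

R = X @ X.conj().T / N                 # sample covariance
_, V = np.linalg.eigh(R)               # eigenvalues ascending
En = V[:, : M - len(doas)]             # noise subspace

grid = np.arange(-90.0, 90.0, 0.1)
spec = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(t, M)) ** 2
                 for t in grid])
# the two largest local maxima of the pseudospectrum are the DOA estimates
peaks = [i for i in range(1, grid.size - 1)
         if spec[i - 1] < spec[i] > spec[i + 1]]
top2 = sorted(sorted(peaks, key=lambda i: spec[i])[-2:])
est = grid[top2]
print(est)                             # close to [-20, 30]
```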
Abstract:
In this paper, we consider robust joint linear precoder/receive filter design for multiuser multi-input multi-output (MIMO) downlink that minimizes the sum mean square error (SMSE) in the presence of imperfect channel state information (CSI). The base station is equipped with multiple transmit antennas, and each user terminal is equipped with multiple receive antennas. The CSI is assumed to be perturbed by estimation error. The proposed transceiver design is based on jointly minimizing a modified function of the MSE, taking into account the statistics of the estimation error under a total transmit power constraint. An alternating optimization algorithm, wherein the optimization is performed with respect to the transmit precoder and the receive filter in an alternating fashion, is proposed. The robustness of the proposed algorithm to imperfections in CSI is illustrated through simulations.
Abstract:
The impulse response of a typical wireless multipath channel can be modeled as a tapped delay line filter whose non-zero components are sparse relative to the channel delay spread. In this paper, a novel method of estimating such sparse multipath fading channels for OFDM systems is explored. In particular, Sparse Bayesian Learning (SBL) techniques are applied to jointly estimate the sparse channel and its second order statistics, and a new Bayesian Cramer-Rao bound is derived for the SBL algorithm. Further, in the context of OFDM channel estimation, an enhancement to the SBL algorithm is proposed, which uses an Expectation Maximization (EM) framework to jointly estimate the sparse channel, unknown data symbols and the second order statistics of the channel. The EM-SBL algorithm is able to recover the support as well as the channel taps more efficiently, and/or using fewer pilot symbols, than the SBL algorithm. To further improve the performance of the EM-SBL, a threshold-based pruning of the estimated second order statistics that are input to the algorithm is proposed, and its mean square error and symbol error rate performance is illustrated through Monte-Carlo simulations. Thus, the algorithms proposed in this paper are capable of obtaining efficient sparse channel estimates even in the presence of a small number of pilots.
Abstract:
We address the problem of designing codes for specific applications using deterministic annealing. Designing a block code over any finite dimensional space may be thought of as forming the corresponding number of clusters over that space. We show that the total distortion incurred in encoding a training set is related to the probability of correct reception over a symmetric channel. While conventional deterministic annealing makes use of the Euclidean squared error distance measure, we have developed an algorithm that can be used for clustering with the Hamming distance as the distance measure, as required in the error-correction scenario.
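A hard-clustering analogue of the idea (a sketch, not the authors' deterministic annealing algorithm): Lloyd-style clustering of binary words under the Hamming distance, with each cluster centre updated by a bitwise majority vote. The toy codewords below are illustrative:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary vectors."""
    return int(np.sum(a != b))

def cluster(words, centers, iters=10):
    """Assign each word to the nearest centre (Hamming distance),
    then update each centre by bitwise majority vote."""
    assign = []
    for _ in range(iters):
        assign = [min(range(len(centers)), key=lambda c: hamming(w, centers[c]))
                  for w in words]
        for c in range(len(centers)):
            members = words[[i for i, a in enumerate(assign) if a == c]]
            if len(members):
                centers[c] = (2 * members.sum(axis=0) >= len(members)).astype(int)
    return centers, assign

words = np.array([[0,0,0,0,1], [0,0,0,1,0], [0,0,0,0,0],
                  [1,1,1,1,0], [1,1,1,0,1], [1,1,1,1,1]])
centers, assign = cluster(words, words[[0, 3]].copy())
print(centers.tolist(), assign)   # two centres, one per tight group
```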
Abstract:
We have carried out Brownian dynamics simulations of binary mixtures of charged colloidal suspensions of particles of two different diameters with varying volume fractions φ and charged impurity concentrations n_i. For a given φ, the effective temperature is lowered in many steps by reducing n_i to see how structure and dynamics evolve. The structural quantities studied are the partial and total pair distribution functions g(r), the static structure factors, the time-averaged ḡ(r), and the Wendt-Abraham parameter. The dynamic quantity is the temporal evolution of the total mean-squared displacement (MSD). All these quantities show that, on lowering the effective temperature at φ = 0.2, the liquid freezes into a body-centered-cubic crystal, whereas at φ = 0.3 a glassy state is formed. The MSD at intermediate times shows significant subdiffusive behavior whose time span increases with a reduction in the effective temperature. The mean-squared displacements for the supercooled liquid with φ = 0.3 show staircase behavior, indicating strongly cooperative jump motion of the particles.
Abstract:
A novel approach for lossless as well as lossy compression of monochrome images using Boolean minimization is proposed. The image is split into bit planes. Each bit plane is divided into windows or blocks of variable size. Each block is transformed into a Boolean switching function in cubical form, treating the pixel values as the output of the function. Compression is performed by minimizing these switching functions using ESPRESSO, a cube-based two-level function minimizer. The minimized cubes are encoded using a code set which satisfies the prefix property. Our lossless compression technique uses linear prediction as a preprocessing step and has a compression ratio comparable to that of the JPEG lossless compression technique. Our lossy compression technique reduces the number of bit planes as a preprocessing step, which incurs minimal loss of image information. The bit planes that remain after preprocessing are compressed using our lossless technique based on Boolean minimization. Qualitatively, one cannot visually distinguish the original image from the lossy image, and the mean square error is kept low. For a mean square error close to that of the JPEG lossy compression technique, our method gives a better compression ratio. The compression scheme is relatively slow, while the decompression time is comparable to that of JPEG.
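The bit-plane split and the lossy plane-dropping preprocessing described above can be sketched as follows (keeping 4 of 8 planes is an illustrative choice, not the paper's setting):

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit image into 8 binary planes; plane 0 is the LSB."""
    return [(img >> b) & 1 for b in range(8)]

def drop_planes(img, keep=4):
    """Zero the low-order bit planes, retaining the top `keep` bits."""
    mask = 0xFF & ~((1 << (8 - keep)) - 1)
    return img & mask

img = np.array([[200, 13], [255, 0]], dtype=np.uint8)
planes = bit_planes(img)
approx = drop_planes(img, keep=4)
mse = np.mean((img.astype(float) - approx.astype(float)) ** 2)
print(approx.tolist(), mse)   # prints [[192, 0], [240, 0]] 114.5
```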