971 results for Monte-carlo Simulations


Relevance:

100.00%

Publisher:

Abstract:

Compressive Sampling Matching Pursuit (CoSaMP) is one of the popular greedy methods in the emerging field of Compressed Sensing (CS). In addition to its appealing empirical performance, CoSaMP also has strong theoretical guarantees for convergence. In this paper, we propose a modification to CoSaMP that adaptively chooses the dimension of the search space in each iteration, using a threshold-based approach. Using Monte Carlo simulations, we show that this modification improves the reconstruction capability of the CoSaMP algorithm in both clean and noisy measurement cases. From empirical observations, we also propose an optimum value of the threshold for use in applications.
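As a rough illustration of the threshold-based selection step, the sketch below modifies the identification stage of a textbook CoSaMP loop: instead of always taking a fixed number of proxy entries, it keeps the entries within a fraction t of the peak proxy value. The rule and the value t = 0.5 are illustrative assumptions, not the threshold optimized in the paper.

```python
import numpy as np

def cosamp_adaptive(A, y, k, t=0.5, n_iter=30):
    """CoSaMP with a threshold-based identification step: keep proxy
    entries within a fraction t of the peak instead of a fixed count."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_iter):
        proxy = np.abs(A.T @ r)
        # adaptive search space: entries above a relative threshold
        picked = np.flatnonzero(proxy >= t * proxy.max())
        support = np.union1d(picked, np.flatnonzero(x)).astype(int)
        # least squares restricted to the merged support
        z = np.zeros(n)
        z[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        # prune back to the k largest coefficients
        x = np.zeros(n)
        top = np.argsort(np.abs(z))[-k:]
        x[top] = z[top]
        r = y - A @ x
        if np.linalg.norm(r) < 1e-12 * np.linalg.norm(y):
            break
    return x

# Synthetic noiseless test problem
rng = np.random.default_rng(0)
m, n, k = 40, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = cosamp_adaptive(A, A @ x_true, k)
```

In the noisy case the relative threshold keeps the search space small when the proxy has a few dominant entries and enlarges it when the proxy is flat, which is the behavior the abstract's modification targets.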


Future space-based gravity wave (GW) experiments such as the Big Bang Observatory (BBO), with their excellent projected one-sigma angular resolution, will measure the luminosity distance to a large number of GW sources to high precision, and the redshifts of single galaxies in the narrow solid angles towards the sources will provide the redshifts of the GW sources. One-sigma BBO beams contain the actual source in only 68% of the cases; the beams that do not contain the source may contain a spurious single galaxy, leading to misidentification. To increase the probability of the source falling within the beam, larger beams have to be considered, decreasing the chances of finding single galaxies in the beams. Saini et al. [T.D. Saini, S.K. Sethi, and V. Sahni, Phys. Rev. D 81, 103009 (2010)] argued, largely analytically, that identifying even a small number of GW source galaxies furnishes a rough distance-redshift relation, which could be used to further resolve sources that have multiple objects in the angular beam. In this work we develop this idea further by introducing a self-calibrating iterative scheme which works in conjunction with Monte Carlo simulations to determine the luminosity distance to GW sources with progressively greater accuracy. This iterative scheme allows one to determine the equation of state of dark energy to within an accuracy of a few percent for a GW experiment possessing a beam width an order of magnitude larger than BBO (and therefore a far poorer angular resolution). This is achieved with no prior information about the nature of dark energy from other data sets such as type Ia supernovae, baryon acoustic oscillations, or the cosmic microwave background. DOI: 10.1103/PhysRevD.87.083001


Fast and efficient channel estimation is key to achieving high data rate performance in mobile and vehicular communication systems, where the channel is fast time-varying. To this end, this work proposes and optimizes channel-dependent training schemes for reciprocal Multiple-Input Multiple-Output (MIMO) channels with beamforming (BF) at the transmitter and receiver. First, assuming that Channel State Information (CSI) is available at the receiver, a channel-dependent Reverse Channel Training (RCT) signal is proposed that enables efficient estimation of the BF vector at the transmitter with a minimum training duration of only one symbol. In contrast, conventional orthogonal training requires a minimum training duration equal to the number of receive antennas. A tight approximation to the capacity lower bound on the system is derived, which is used as a performance metric to optimize the parameters of the RCT. Next, assuming that CSI is available at the transmitter, a channel-dependent forward-link training signal is proposed and its power and duration are optimized with respect to an approximate capacity lower bound. Monte Carlo simulations illustrate the significant performance improvement offered by the proposed channel-dependent training schemes over the existing channel-agnostic orthogonal training schemes.
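The one-symbol reverse training idea can be sketched numerically: when the receiver knows the channel, it sends the conjugate of its receive beamforming vector once over the reciprocal channel, and the transmitter's single received vector points (after conjugation) along the dominant transmit beamforming direction. The array sizes and SNR below are arbitrary illustrative choices, not the paper's optimized parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nr = 4, 6  # transmit / receive antenna counts (assumed)
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)

# Receiver-side CSI: receive BF vector = dominant left singular vector of H.
U, s, Vh = np.linalg.svd(H)
u1 = U[:, 0]
v1 = Vh[0].conj()                         # true dominant transmit BF vector

# Reverse channel training: one symbol, conj(u1), over the reciprocal channel H^T.
snr = 1e4
noise = (rng.standard_normal(nt) + 1j * rng.standard_normal(nt)) / np.sqrt(2 * snr)
rx_at_tx = H.T @ u1.conj() + noise        # equals s[0] * conj(v1) + noise

# Transmitter estimates its BF vector from this single received vector.
v_hat = rx_at_tx.conj() / np.linalg.norm(rx_at_tx)

# Alignment with the true transmit BF direction (up to a phase):
alignment = np.abs(np.vdot(v1, v_hat))
```

The contrast with orthogonal training is that estimating all of H at the transmitter would need at least nr training symbols, whereas the beamforming direction itself is delivered in one.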


Motivated by experiments on Josephson junction arrays in a magnetic field and ultracold interacting atoms in an optical lattice in the presence of a "synthetic" orbital magnetic field, we study the "fully frustrated" Bose-Hubbard model and quantum XY model with half a flux quantum per lattice plaquette. Using Monte Carlo simulations and the density matrix renormalization group method, we show that these kinetically frustrated boson models admit three phases at integer filling: a weakly interacting chiral superfluid phase with staggered loop currents which spontaneously break time-reversal symmetry, a conventional Mott insulator at strong coupling, and a remarkable "chiral Mott insulator" (CMI) with staggered loop currents sandwiched between them at intermediate correlation. We discuss how the CMI state may be viewed as an exciton condensate or a vortex supersolid, study a Jastrow variational wave function which captures its correlations, present results for the boson momentum distribution across the phase diagram, and consider various experimental implications of our phase diagram. Finally, we consider generalizations to a staggered flux Bose-Hubbard model and a two-dimensional (2D) version of the CMI in weakly coupled ladders.


In this study, the free energy barriers for homogeneous crystal nucleation in a system that exhibits a eutectic point are computed using Monte Carlo simulations. The system studied is a binary hard sphere mixture with a diameter ratio of 0.85 between the smaller and larger hard spheres. The simulations of crystal nucleation are performed for the entire range of fluid compositions. The free energy barrier is found to be the highest near the eutectic point and is nearly five times that for the pure fluid, which slows down the nucleation rate by a factor of 10^31. These free energy barriers are some of the highest ever computed using simulations. For most of the conditions studied, the composition of the critical nucleus corresponds to either one of the two thermodynamically stable solid phases. However, near the eutectic point, the nucleation barrier is lowest for the formation of the metastable random hexagonal close-packed (rhcp) solid phase with composition lying in the two-phase region of the phase diagram. The fluid to solid phase transition is hypothesized to proceed via formation of a metastable rhcp phase followed by a phase separation into the respective stable fcc solid phases.


Wavelet coefficients based on spatial wavelets are used as damage indicators to identify the damage location as well as the size of the damage in a laminated composite beam with localized matrix cracks. A finite element model of the composite beam is used in conjunction with a matrix crack based damage model to simulate the damaged composite beam structure. The modes of vibration of the beam are analyzed using the wavelet transform in order to identify the location and the extent of the damage by sensing the local perturbations at the damage locations. The location of the damage is identified by a sudden change in the spatial distribution of wavelet coefficients. Monte Carlo simulations (MCS) are used to investigate the effect of ply-level uncertainty in composite material properties, such as ply longitudinal stiffness, transverse stiffness, shear modulus, and Poisson's ratio, on the damage detection parameter, the wavelet coefficient. In this study, numerical simulations are performed for single and multiple damage cases. It is observed that spatial wavelets can be used as a reliable damage detection tool for composite beams with localized matrix cracks, which can result from low-velocity impact damage.
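The localization mechanism can be sketched in a few lines: a localized matrix crack introduces a small slope discontinuity (kink) in a mode shape, which is invisible to the eye but produces a sharp spike in spatial detail coefficients. The sketch below uses a second-difference filter as a crude stand-in for the spatial wavelet transform; the mode shape, damage location, and kink magnitude are all hypothetical.

```python
import numpy as np

# Toy first mode shape of a damaged beam: a matrix crack at x = 0.3
# (hypothetical location) adds a small slope discontinuity (kink).
x = np.linspace(0.0, 1.0, 1001)
damage_at = 0.3
mode = np.sin(np.pi * x) + 0.05 * np.maximum(0.0, x - damage_at)

# Detail coefficients via a second-difference filter, a crude stand-in for
# the spatial wavelet: smooth away from the kink, a sharp spike at it.
detail = np.abs(np.diff(mode, 2))
found = x[1:-1][np.argmax(detail)]   # estimated damage location
```

In the paper's MCS setting, this localization step would be repeated over random draws of the ply-level material properties to quantify how robust the wavelet-coefficient spike is to parameter uncertainty.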


The current study analyzes the leachate distribution in the Orchard Hills Landfill, Davis Junction, Illinois, using a two-phase flow model to assess the influence of variability in hydraulic conductivity on the effectiveness of the existing leachate recirculation system and its operations through reliability analysis. Numerical modeling, using finite-difference code, is performed with due consideration to the spatial variation of hydraulic conductivity of the municipal solid waste (MSW). The inhomogeneous and anisotropic waste condition is assumed because it is a more realistic representation of the MSW. For the reliability analysis, the landfill is divided into 10 MSW layers with different mean values of vertical and horizontal hydraulic conductivities (decreasing from top to bottom), and the parametric study is performed by taking the coefficients of variation (COVs) as 50, 100, 150, and 200%. Monte Carlo simulations are performed to obtain statistical information (mean and COV) of output parameters of the (1) wetted area of the MSW, (2) maximum induced pore pressure, and (3) leachate outflow. The results of the reliability analysis are used to determine the influence of hydraulic conductivity on the effectiveness of the leachate recirculation and are discussed in the light of a deterministic approach. The study is useful in understanding the efficiency of the leachate recirculation system. (C) 2013 American Society of Civil Engineers.
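The statistical machinery of such a parametric study, lognormal sampling of layer conductivities with a target mean and COV, then mean/COV statistics of an output quantity, can be sketched as below. The layer values and COV are assumed for illustration, and a harmonic-mean column conductivity stands in for the two-phase flow model, so only the Monte Carlo bookkeeping is meaningful here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical layered MSW column: mean vertical hydraulic conductivity
# decreasing from top to bottom (all values assumed, units m/s).
mean_K = np.array([1e-4, 5e-5, 2e-5, 1e-5, 5e-6])
cov = 1.0                                  # COV = 100%

# Lognormal sampling matching the target mean and COV of each layer.
sigma2 = np.log(1.0 + cov**2)
mu = np.log(mean_K) - sigma2 / 2.0
n_mc = 20000
K = np.exp(mu + np.sqrt(sigma2) * rng.standard_normal((n_mc, mean_K.size)))

# Effective vertical conductivity of the column (harmonic mean over equal
# layer thicknesses), a crude stand-in for the flow model's output.
K_eff = mean_K.size / np.sum(1.0 / K, axis=1)

# Output statistics of the kind reported in the reliability analysis.
mean_out = K_eff.mean()
cov_out = K_eff.std() / mean_out
```

Repeating this for COVs of 50, 100, 150, and 200% would reproduce the structure (though not the physics) of the parametric study described above.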


In this paper, we consider the problem of finding a spectrum hole of a specified bandwidth in a given wide band of interest. We propose a new, simple, and easily implementable sub-Nyquist sampling scheme for signal acquisition and a spectrum hole search algorithm that exploits sparsity in the primary spectral occupancy in the frequency domain by testing a group of adjacent subbands in a single test. The sampling scheme deliberately introduces aliasing during signal acquisition, resulting in a signal that is the sum of signals from adjacent subbands. Energy-based hypothesis tests are used to provide an occupancy decision over the group of subbands, and this forms the basis of the proposed algorithm to find contiguous spectrum holes. We extend this framework to a multi-stage sensing algorithm that can be employed in a variety of spectrum sensing scenarios, including non-contiguous spectrum hole search. Further, we provide the analytical means to optimize the hypothesis tests with respect to the detection thresholds, number of samples, and group size to minimize the detection delay under a given error rate constraint. Depending on the sparsity and SNR, the proposed algorithms can lead to significantly lower detection delays compared to a conventional bin-by-bin energy detection scheme; the latter is in fact a special case of the group test when the group size is set to 1. We validate our analytical results via Monte Carlo simulations.
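The grouped energy test can be illustrated on synthetic per-subband energies: one decision per block of adjacent subbands, so a sparse occupancy is cleared in far fewer tests than bin-by-bin detection. The band counts, SNR, and the simple mean-plus-four-sigma threshold below are illustrative assumptions; the paper optimizes thresholds, sample counts, and group size analytically.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bands, group, n_samp = 64, 8, 200
occupied = {5, 6, 21, 40}                  # sparse primary occupancy (assumed)
snr = 10.0

# Per-subband sample energies: noise everywhere, signal + noise where occupied.
x = rng.standard_normal((n_bands, n_samp))
for b in occupied:
    x[b] += np.sqrt(snr) * rng.standard_normal(n_samp)
energy = np.sum(x**2, axis=1)

# Group test: one energy decision per block of `group` adjacent subbands.
# Illustrative threshold: H0 mean (group * n_samp) plus four H0 standard
# deviations of the chi-squared statistic.
group_energy = energy.reshape(-1, group).sum(axis=1)
threshold = group * n_samp + 4.0 * np.sqrt(2.0 * group * n_samp)
vacant_groups = np.flatnonzero(group_energy < threshold)

# Declare the first vacant group a contiguous spectrum hole of `group` subbands.
hole_start = int(vacant_groups[0]) * group
```

With group size 1 the same code reduces to conventional bin-by-bin energy detection, which is the special case noted in the abstract.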


In this paper, a nonlinear suboptimal detector whose performance in heavy-tailed noise is significantly better than that of the matched filter is proposed. The detector consists of a nonlinear wavelet denoising filter to enhance the signal-to-noise ratio, followed by a replica correlator. Performance of the detector is investigated through an asymptotic theoretical analysis as well as Monte Carlo simulations. The proposed detector offers the following advantages over the optimal (in the Neyman-Pearson sense) detector: it is easier to implement, and it is more robust with respect to error in modeling the probability distribution of noise.
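Why a nonlinear front end ahead of the replica correlator helps in heavy-tailed noise can be shown with a small Monte Carlo experiment. The sketch substitutes a simple clipping nonlinearity for the paper's wavelet denoising filter (a deliberate simplification, named as such) and compares the deflection, the normalized separation of the correlator statistic under the two hypotheses; all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, trials, amp, clip = 256, 2000, 0.5, 3.0
s = np.sin(2 * np.pi * 8 * np.arange(n) / n)        # known replica (template)

w1 = rng.standard_cauchy((trials, n))               # heavy-tailed noise, H1 trials
w0 = rng.standard_cauchy((trials, n))               # heavy-tailed noise, H0 trials

def deflection(x1, x0):
    """Normalized separation of the replica-correlator statistic."""
    t1, t0 = x1 @ s, x0 @ s
    return (t1.mean() - t0.mean()) / np.sqrt(0.5 * (t1.var() + t0.var()))

# Plain matched filter: outliers dominate the statistic's spread.
d_raw = deflection(amp * s + w1, w0)

# Clipping front end (stand-in for wavelet denoising) before correlation.
d_clip = deflection(np.clip(amp * s + w1, -clip, clip),
                    np.clip(w0, -clip, clip))
```

The same comparison with a wavelet-domain soft-thresholding filter in place of the clipper would reproduce the structure of the detector proposed in the abstract.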


This paper considers the design of a power-controlled reverse channel training (RCT) scheme for spatial multiplexing (SM)-based data transmission along the dominant modes of the channel in a time-division duplex (TDD) multiple-input and multiple-output (MIMO) system, when channel knowledge is available at the receiver. A channel-dependent power-controlled RCT scheme is proposed, using which the transmitter estimates the beamforming (BF) vectors required for the forward-link SM data transmission. Tight approximate expressions for 1) the mean square error (MSE) in the estimate of the BF vectors, and 2) a capacity lower bound (CLB) for an SM system, are derived and used to optimize the parameters of the training sequence. Moreover, an extension of the channel-dependent training scheme and the data rate analysis to a multiuser scenario with M user terminals is presented. For the single-mode BF system, a closed-form expression for an upper bound on the average sum data rate is derived, which is shown to scale as ((L_c - L_{B,τ})/L_c) log log M asymptotically in M, where L_c and L_{B,τ} are the channel coherence time and training duration, respectively. The significant performance gain offered by the proposed training sequence over the conventional constant-power orthogonal RCT sequence is demonstrated using Monte Carlo simulations.


Compressive Sensing theory combines signal sampling and compression for sparse signals, resulting in a reduction in the sampling rate and computational complexity of the measurement system. In recent years, many recovery algorithms have been proposed to reconstruct the signal efficiently. Look Ahead OMP (LAOMP) is a recently proposed method which uses a look-ahead strategy and performs significantly better than other greedy methods. In this paper, we propose a modification to the LAOMP algorithm to choose the look-ahead parameter L adaptively, thus reducing the complexity of the algorithm without compromising performance. The performance of the algorithm is evaluated through Monte Carlo simulations.
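The look-ahead strategy itself can be sketched as follows: at each greedy step, the top few candidate atoms are each tentatively completed to a full size-k support by plain OMP, and the candidate whose completion fits the measurements best is kept. This sketch uses a fixed look-ahead parameter L; the adaptive choice of L, which is the modification the abstract proposes, is not reproduced here.

```python
import numpy as np

def omp_run(A, y, k, support):
    """Greedily extend `support` to size k with plain OMP; return final residual."""
    support = list(support)
    r = y.copy()
    while True:
        if support:
            coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
            r = y - A[:, support] @ coef
        if len(support) >= k:
            return r
        proxy = np.abs(A.T @ r)
        proxy[support] = -1.0            # exclude already-selected atoms
        support.append(int(np.argmax(proxy)))

def laomp(A, y, k, L=3):
    """Look-Ahead OMP with a fixed look-ahead parameter L."""
    support = []
    r = y.copy()
    for _ in range(k):
        proxy = np.abs(A.T @ r)
        proxy[support] = -1.0
        candidates = np.argsort(proxy)[-L:]
        # look ahead: keep the candidate whose greedy completion fits y best
        best = min(candidates,
                   key=lambda j: np.linalg.norm(omp_run(A, y, k, support + [int(j)])))
        support.append(int(best))
        coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(5)
m, n, k = 40, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = laomp(A, A @ x_true, k)
```

The cost of each step grows with L, which is why adapting L (small when the proxy is decisive, larger when it is ambiguous) reduces complexity.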


We develop a strong-coupling (t << U) expansion technique for calculating the density profile for bosonic atoms trapped in an optical lattice with an overall harmonic trap at finite temperature and finite on-site interaction in the presence of superfluid regions. Our results match well with quantum Monte Carlo simulations at finite temperature. We also show that the superfluid order parameter never vanishes in the trap due to the proximity effect. Our calculations for the scaled density in the vacuum-to-superfluid transition agree well with the experimental data for appropriate temperatures. We present calculations for the entropy per particle as a function of temperature which can be used to calibrate the temperature in experiments. We also discuss issues connected with the demonstration of universal quantum critical scaling in the experiments.


Numerous algorithms have been proposed recently for sparse signal recovery in Compressed Sensing (CS). In practice, the number of measurements can be very limited due to the nature of the problem, and/or the underlying statistical distribution of the non-zero elements of the sparse signal may not be known a priori. It has been observed that the performance of any sparse signal recovery algorithm depends on these factors, which makes the selection of a suitable sparse recovery algorithm difficult. To address such situations, we propose a fusion framework in which we employ multiple sparse signal recovery algorithms and fuse their estimates to obtain a better estimate. Theoretical results justifying the performance improvement are shown. The efficacy of the proposed scheme is demonstrated by Monte Carlo simulations using synthetic sparse signals and ECG signals selected from the MIT-BIH database.
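One simple instance of such a fusion scheme is sketched below: two participating algorithms (OMP and one-shot correlation thresholding, both chosen here for brevity), a union of their candidate supports, least squares over the union, and pruning back to k entries. The algorithm choices and problem sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
m, n, k = 40, 80, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

def ls_estimate(support):
    """Least-squares coefficients restricted to `support`, zero elsewhere."""
    support = list(support)
    x = np.zeros(n)
    x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x

def omp(k_):
    support, r = [], y.copy()
    for _ in range(k_):
        proxy = np.abs(A.T @ r)
        proxy[support] = -1.0
        support.append(int(np.argmax(proxy)))
        x = ls_estimate(support)
        r = y - A @ x
    return x

# Participating algorithm 1: OMP.  Participating algorithm 2: one-shot
# correlation thresholding (keep the k largest entries of A^T y).
x1 = omp(k)
x2 = ls_estimate(np.argsort(np.abs(A.T @ y))[-k:])

# Fusion: union of candidate supports, LS over the union, prune to k.
union = sorted(set(np.flatnonzero(x1)) | set(np.flatnonzero(x2)))
z = ls_estimate(union)
x_fused = np.zeros(n)
top = np.argsort(np.abs(z))[-k:]
x_fused[top] = z[top]
```

The intuition behind fusion is that the union of supports is more likely to contain the true support than any single algorithm's estimate, and a least-squares fit over the union then discards the spurious atoms.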


The problem of time variant reliability analysis of randomly parametered and randomly driven nonlinear vibrating systems is considered. The study combines two Monte Carlo variance reduction strategies into a single framework to tackle the problem. The first of these strategies is based on the application of the Girsanov transformation to account for the randomness in dynamic excitations, and the second approach is fashioned after the subset simulation method to deal with randomness in system parameters. Illustrative examples include study of single/multi degree of freedom linear/non-linear inelastic randomly parametered building frame models driven by stationary/non-stationary, white/filtered white noise support acceleration. The estimated reliability measures are demonstrated to compare well with results from direct Monte Carlo simulations. (C) 2014 Elsevier Ltd. All rights reserved.
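For context, the brute-force baseline that these variance-reduction strategies improve upon can be sketched on a toy problem: direct Monte Carlo estimation of a first-passage failure probability for a linear SDOF oscillator under white-noise support acceleration. This is deliberately not the Girsanov/subset-simulation scheme of the paper, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy stand-in for the frame models: a linear SDOF oscillator,
# x'' + 2*zeta*wn*x' + wn^2*x = -a_g(t), under white-noise support
# acceleration. All values below are assumed for illustration.
wn, zeta = 2.0 * np.pi, 0.05          # 1 Hz natural frequency, 5% damping
dt, T, sigma = 0.01, 10.0, 1.0
threshold = 0.45                      # failure: |x| exceeds this displacement
steps, n_mc = int(T / dt), 2000

x = np.zeros(n_mc)
v = np.zeros(n_mc)
xmax = np.zeros(n_mc)
for _ in range(steps):
    a_g = sigma * rng.standard_normal(n_mc) / np.sqrt(dt)   # discrete white noise
    v += dt * (-2.0 * zeta * wn * v - wn**2 * x - a_g)      # semi-implicit Euler
    x += dt * v
    np.maximum(xmax, np.abs(x), out=xmax)

pf = np.mean(xmax > threshold)        # direct Monte Carlo failure probability
```

For realistically small failure probabilities this direct estimator needs an enormous number of samples, which is exactly what the Girsanov transformation (for the random excitation) and subset simulation (for the random parameters) are combined to avoid.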


It is well known that the impulse response of a wide-band wireless channel is approximately sparse, in the sense that it has a small number of significant components relative to the channel delay spread. In this paper, we consider the estimation of the unknown channel coefficients and their support in OFDM systems using a sparse Bayesian learning (SBL) framework for exact inference. In a quasi-static, block-fading scenario, we employ the SBL algorithm for channel estimation and propose a joint SBL (J-SBL) and a low-complexity recursive J-SBL algorithm for joint channel estimation and data detection. In a time-varying scenario, we use a first-order autoregressive model for the wireless channel and propose a novel, recursive, low-complexity Kalman filtering-based SBL (KSBL) algorithm for channel estimation. We generalize the KSBL algorithm to obtain the recursive joint KSBL algorithm that performs joint channel estimation and data detection. Our algorithms can efficiently recover a group of approximately sparse vectors even when the measurement matrix is partially unknown due to the presence of unknown data symbols. Moreover, the algorithms can fully exploit the correlation structure in the multiple measurements. Monte Carlo simulations illustrate the efficacy of the proposed techniques in terms of the mean-square error and bit error rate performance.
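The recursion that a Kalman filtering-based scheme embeds can be illustrated on the simplest case: a scalar Kalman filter tracking a single AR(1) channel coefficient from pilot observations. This is only the predict/update building block, not the SBL machinery around it, and the AR coefficient, noise levels, and pilots are assumed values.

```python
import numpy as np

rng = np.random.default_rng(7)

rho = 0.99                               # AR(1): h_t = rho * h_{t-1} + w_t
sigma_w = np.sqrt(1.0 - rho**2)          # keeps the channel at unit power
sigma_v = 0.1                            # observation noise std
T = 500
p = np.ones(T)                           # known pilot symbols (all ones)

# Simulate the time-varying channel and pilot observations y_t = p_t h_t + v_t.
h = np.empty(T)
h[0] = rng.standard_normal()
for t in range(1, T):
    h[t] = rho * h[t - 1] + sigma_w * rng.standard_normal()
y = p * h + sigma_v * rng.standard_normal(T)

# Scalar Kalman filter: predict with the AR(1) model, update with the pilot.
h_hat = np.empty(T)
m, P = 0.0, 1.0                          # prior mean and variance
for t in range(T):
    m, P = rho * m, rho**2 * P + sigma_w**2          # predict
    K = P * p[t] / (p[t]**2 * P + sigma_v**2)        # Kalman gain
    m = m + K * (y[t] - p[t] * m)                    # update
    P = (1.0 - K * p[t]) * P
    h_hat[t] = m

mse = np.mean((h_hat[100:] - h[100:])**2)
```

In the KSBL setting this recursion runs jointly over all channel taps, with SBL hyperparameters shrinking the taps outside the sparse support, and with detected data symbols standing in for pilots in the joint variant.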