982 results for Large Extra Dimensions


Relevance: 20.00%

Abstract:

In this paper, we are interested in high-spectral-efficiency multicode CDMA systems with a large number of users employing single/multiple transmit antennas and higher-order modulation. In particular, we consider a local-neighborhood-search-based multiuser detection algorithm which offers very good performance at low complexity, suited for systems with a large number of users employing M-QAM/M-PSK. We apply the algorithm to the chip matched filter output vector. We demonstrate near-single-user (SU) performance of the algorithm in CDMA systems with a large number of users using 4-QAM/16-QAM/64-QAM/8-PSK on AWGN, frequency-flat, and frequency-selective fading channels. We further show that the algorithm performs very well in multicode multiple-input multiple-output (MIMO) CDMA systems as well, outperforming other linear detectors and interference cancelers reported in the literature for such systems. The per-symbol complexity of the search algorithm is $O(K^2 n_t^2 n_c^2 M)$, where $K$ is the number of users, $n_t$ the number of transmit antennas at each user, $n_c$ the number of spreading codes multiplexed on each transmit antenna, and $M$ the modulation alphabet size, making the algorithm attractive for multiuser detection in large-dimension multicode MIMO-CDMA systems with M-QAM.
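
To make the search concrete, here is a minimal sketch of a greedy one-symbol-substitution neighborhood search of the kind described, written against a generic linear model y = Hx + n. The function name, the toy model, and the use of full cost recomputation are illustrative assumptions, not the paper's exact algorithm (which operates on the chip matched filter output and relies on incremental cost updates to reach the quoted complexity):

```python
import numpy as np

def neighborhood_search_detect(H, y, alphabet, max_sweeps=50):
    """Hypothetical sketch: greedy one-symbol-flip local search on
    ||y - Hx||^2, started from the quantized matched-filter output.
    alphabet is a 1-D numpy array of constellation points.
    Stops when no single-coordinate substitution lowers the cost."""
    K = H.shape[1]
    mf = H.conj().T @ y
    # Initial estimate: quantize each matched-filter output to the alphabet.
    x = np.array([alphabet[np.argmin(np.abs(alphabet - s))] for s in mf])
    cost = np.linalg.norm(y - H @ x) ** 2
    for _ in range(max_sweeps):
        best_cost, best_move = cost, None
        for k in range(K):                 # examine all 1-symbol neighbors
            for s in alphabet:
                if s == x[k]:
                    continue
                trial = x.copy()
                trial[k] = s
                c = np.linalg.norm(y - H @ trial) ** 2
                if c < best_cost:
                    best_cost, best_move = c, (k, s)
        if best_move is None:              # local minimum reached
            break
        x[best_move[0]], cost = best_move[1], best_cost
    return x
```

Each sweep examines O(KM) one-symbol neighbors; with incremental cost updates in place of the full recomputation above, repeated sweeps yield a per-symbol cost of the order quoted in the abstract.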

Relevance: 20.00%

Abstract:

Low-complexity near-optimal detection of signals in MIMO systems with a large number (tens) of antennas is receiving increased attention. In this paper, we first propose a variant of the Markov chain Monte Carlo (MCMC) algorithm which i) alleviates the stalling problem encountered in the conventional MCMC algorithm at high SNRs, and ii) achieves near-optimal performance for large numbers of antennas (e.g., 16×16, 32×32, 64×64 MIMO) with 4-QAM. We call this the randomized MCMC (R-MCMC) algorithm. Second, we propose another algorithm based on a random selection approach to choose candidate vectors to be tested in a local neighborhood search. This algorithm, which we call the randomized search (RS) algorithm, also achieves near-optimal performance for large numbers of antennas with 4-QAM. The complexities of the proposed R-MCMC and RS algorithms are quadratic/sub-quadratic in the number of transmit antennas, which makes them attractive for detection in large-MIMO systems. We also propose message-passing-aided R-MCMC and RS algorithms, which are shown to perform well for higher-order QAM.
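
As a rough illustration of the stalling fix, the sketch below mixes standard Gibbs updates with occasional uniform random draws, a common randomization remedy for MCMC detectors at high SNR. The mixing rule, parameter names, and real-valued model are assumptions for illustration, not necessarily the paper's exact R-MCMC update:

```python
import numpy as np

def r_mcmc_detect(H, y, alphabet, sigma2, n_iters=100, q=0.1, seed=0):
    """Hypothetical sketch: Gibbs-style MCMC detection where, with
    probability q, a coordinate is redrawn uniformly from the alphabet
    instead of from its Gibbs conditional (anti-stalling move).
    Returns the lowest-cost vector visited."""
    rng = np.random.default_rng(seed)
    K = H.shape[1]
    x = rng.choice(alphabet, size=K)
    best_x, best_cost = x.copy(), np.linalg.norm(y - H @ x) ** 2
    for _ in range(n_iters):
        for k in range(K):
            if rng.random() < q:           # randomized (uniform) move
                x[k] = rng.choice(alphabet)
            else:                          # usual Gibbs conditional
                costs = []
                for s in alphabet:
                    trial = x.copy()
                    trial[k] = s
                    costs.append(np.linalg.norm(y - H @ trial) ** 2)
                costs = np.asarray(costs)
                w = np.exp(-(costs - costs.min()) / (2.0 * sigma2))
                x[k] = rng.choice(alphabet, p=w / w.sum())
            c = np.linalg.norm(y - H @ x) ** 2
            if c < best_cost:
                best_x, best_cost = x.copy(), c
    return best_x
```

Tracking the best vector visited, rather than the final sample, is what lets such samplers double as near-ML detectors.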

Relevance: 20.00%

Abstract:

Parabolized stability equation (PSE) models are being developed to predict the evolution of low-frequency, large-scale wavepacket structures and their radiated sound in high-speed turbulent round jets. Linear PSE wavepacket models were previously shown to be in reasonably good agreement with the amplitude envelope and phase measured using a microphone array placed just outside the jet shear layer [1,2]. Here we show that they are also in very good agreement with hot-wire measurements at the jet centerline in the potential core, for a different set of experiments [3]. When used as a model source for an acoustic analogy, the predicted far-field noise radiation is in reasonably good agreement with microphone measurements for aft angles, where contributions from large-scale structures dominate the acoustic field. Nonlinear PSE is then employed to determine the relative importance of mode interactions on the wavepackets. A series of nonlinear computations with randomized initial conditions is used to obtain bounds for the evolution of the modes in the natural turbulent jet flow. It is found that nonlinearity has a very limited impact on the evolution of the wavepackets for St ≥ 0.3. Finally, the nonlinear mechanism for the generation of a low-frequency mode as the difference-frequency mode [4,5] of two forced frequencies is investigated in the scope of the high Reynolds number jets considered in this paper.
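
For orientation, the wavepacket form underlying linear PSE is the standard slowly varying ansatz below (a textbook form, not quoted from this paper): a slowly varying shape function modulates a fast oscillation in the streamwise direction, azimuth, and time.

```latex
q'(x, r, \theta, t) \;=\; \hat{q}(x, r)\,
  \exp\!\left( i \int_{x_0}^{x} \alpha(\xi)\, d\xi \;+\; i m \theta \;-\; i \omega t \right)
  \;+\; \mathrm{c.c.}
```

Here $\hat{q}$ is the slowly varying amplitude, $\alpha(x)$ the local streamwise wavenumber, $m$ the azimuthal mode number, and $\omega$ the frequency; nonlinear PSE adds forcing terms that couple the $(\omega, m)$ modes, which is what the randomized-initial-condition study above probes.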

Relevance: 20.00%

Abstract:

We present a simple model that can be used to account for the rheological behaviour observed in recent experiments on micellar gels. The model combines attachment-detachment kinetics with stretching due to shear, and shows well-defined jammed and flowing states. The large-deviation function (LDF) for the coarse-grained velocity becomes increasingly non-quadratic as the applied force F is increased, in a range near the yield threshold. The power fluctuations are found to obey a steady-state fluctuation relation (FR) at small F. However, the FR is violated when F is near the transition from the flowing to the jammed state, although the LDF still exists; the antisymmetric part of the LDF is found to be nonlinear in its argument. Our approach suggests that large fluctuations, and motion in a direction opposite to an imposed force, are likely to occur in a wider class of systems near yielding.
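
For reference, the conventional steady-state fluctuation relation that the power fluctuations obey at small F has the form below (a standard statement written with an effective-temperature parameter; the symbols are illustrative, not the paper's notation):

```latex
\lim_{\tau \to \infty} \frac{1}{\tau}
  \ln \frac{P_\tau(+p)}{P_\tau(-p)} \;=\; \frac{p}{k_B T_{\mathrm{eff}}}
\qquad\Longleftrightarrow\qquad
I(-p) - I(p) \;=\; \frac{p}{k_B T_{\mathrm{eff}}},
```

where $P_\tau(p)$ is the distribution of the power averaged over a window $\tau$ and $I$ is its large-deviation function. The violation reported near jamming is precisely the antisymmetric combination $I(-p) - I(p)$ ceasing to be linear in $p$.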

Relevance: 20.00%

Abstract:

In this paper we present a hardware-software hybrid technique for modular multiplication over large binary fields. The technique applies the Karatsuba-Ofman algorithm for polynomial multiplication together with a novel technique for reduction, based on the popular repeated-multiplication technique and Barrett reduction. We propose a new design of a parallel polynomial multiplier that serves as a hardware accelerator for large field multiplications. We show that the proposed reduction technique, accelerated using the modified polynomial multiplier, achieves significantly higher performance than a purely software technique and other hybrid techniques. We also show that the hybrid accelerated approach to modular field multiplication is significantly faster than the Montgomery-algorithm-based integrated multiplication approach.
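
As a software-side illustration of the multiplication step, here is a minimal Karatsuba-Ofman carry-less multiplier for GF(2)[x] polynomials encoded as Python integers (bit i holds the coefficient of x^i). This sketches only the polynomial product, not the paper's reduction technique or hardware design, and the function name and base-case threshold are arbitrary:

```python
def gf2_karatsuba(a: int, b: int, n: int) -> int:
    """Carry-less product of two n-bit GF(2)[x] polynomials via
    Karatsuba-Ofman: three half-size products instead of four.
    Addition in GF(2) is XOR, so no carries propagate."""
    if n <= 8:                              # schoolbook base case
        r = 0
        for i in range(n):
            if (b >> i) & 1:
                r ^= a << i
        return r
    h = n // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h               # a = a1*x^h + a0
    b0, b1 = b & mask, b >> h
    p0 = gf2_karatsuba(a0, b0, h)           # low * low
    p2 = gf2_karatsuba(a1, b1, n - h)       # high * high
    pm = gf2_karatsuba(a0 ^ a1, b0 ^ b1, n - h)
    # Middle term (a0*b1 + a1*b0) = pm + p0 + p2 over GF(2), i.e. XOR.
    return p0 ^ ((pm ^ p0 ^ p2) << h) ^ (p2 << (2 * h))

# e.g. (x^3 + x + 1)(x^2 + x + 1) = x^5 + x^4 + 1:
assert gf2_karatsuba(0b1011, 0b0111, 4) == 0b110001
```

In the hybrid scheme described above, a product like this would then be reduced modulo the field polynomial (via the proposed Barrett-style technique) to return to the field.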

Relevance: 20.00%

Abstract:

The breakdown of the Stokes-Einstein (SE) relation between diffusivity and viscosity at low temperatures is considered one of the hallmarks of glassy dynamics in liquids. Theoretical analyses relate this breakdown to the presence of heterogeneous dynamics and, by extension, to the fragility of glass formers. We investigate the breakdown of the SE relation in 2, 3, and 4 dimensions in order to understand these interrelations. Results from simulations of model glass formers show that the degree of breakdown of the SE relation decreases with increasing spatial dimensionality. The breakdown itself can be rationalized via the difference between the activation free energies for diffusivity and viscosity (or relaxation times) in the Adam-Gibbs relation in three and four dimensions. The behavior in two dimensions can also be understood in terms of a generalized Adam-Gibbs relation observed in previous work. We calculate various measures of heterogeneity of dynamics and find that the degree of SE breakdown and the measures of heterogeneity of dynamics are generally well correlated, but with some exceptions. The two-dimensional systems we study deviate from the pattern of behavior of the three- and four-dimensional systems at both high and low temperatures. The fragility of the studied liquids is found to increase with spatial dimensionality, contrary to the expectation based on the association of fragility with heterogeneous dynamics.
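
For concreteness, the relations at issue are the SE relation and the Adam-Gibbs forms through which the abstract rationalizes its breakdown (standard expressions, with symbols chosen here for illustration):

```latex
D \;\propto\; \frac{k_B T}{\eta}
\quad\text{(Stokes-Einstein)},
\qquad
D \sim \exp\!\left(-\frac{A_D}{T S_c}\right),\quad
\tau \sim \exp\!\left(+\frac{A_\tau}{T S_c}\right)
\quad\text{(Adam-Gibbs)},
```

where $S_c$ is the configurational entropy. If the activation parameters $A_D$ and $A_\tau$ differ, the product $D\tau$ (and hence $D\eta/T$) acquires a temperature dependence $\exp[(A_\tau - A_D)/(T S_c)]$, which is the breakdown quantified in three and four dimensions.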

Relevance: 20.00%

Abstract:

We address the problem of mining targeted association rules over multidimensional market-basket data. Here, each transaction has, in addition to the set of purchased items, ancillary dimension attributes associated with it. Based on these dimensions, transactions can be visualized as distributed over the cells of an n-dimensional cube. In this framework, a targeted association rule is of the form $\{X \rightarrow Y\}_R$, where $R$ is a convex region in the cube and $X \rightarrow Y$ is a traditional association rule within region $R$. We first describe the TOARM algorithm, based on classical techniques, for identifying targeted association rules. Then, we discuss the concepts of bottom-up aggregation and cubing, leading to the CellUnion technique. This approach is further extended, using notions of cube-count interleaving and credit-based pruning, to derive the IceCube algorithm. Our experiments demonstrate that IceCube consistently provides the best execution-time performance, especially for large and complex data cubes.
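
To pin down the rule semantics, here is a minimal baseline (in the spirit of the classical-technique TOARM baseline, though the exact procedure is an assumption) that evaluates one targeted rule by restricting the transactions to the region R before counting support and confidence:

```python
def targeted_rule_stats(transactions, X, Y, in_region):
    """Support and confidence of the targeted rule {X -> Y}_R.

    transactions: iterable of (items, dims) pairs, where items is a set
                  of purchased items and dims the dimension attributes.
    in_region:    predicate over dims defining the convex region R."""
    region = [items for items, dims in transactions if in_region(dims)]
    n = len(region)
    n_x  = sum(1 for items in region if X <= items)          # X subset
    n_xy = sum(1 for items in region if (X | Y) <= items)    # X ∪ Y subset
    support = n_xy / n if n else 0.0
    confidence = n_xy / n_x if n_x else 0.0
    return support, confidence

# e.g. rules targeted at (year, store) = (2011, "S1") cells:
# targeted_rule_stats(db, {"bread"}, {"butter"},
#                     lambda d: d[0] == 2011 and d[1] == "S1")
```

The CellUnion and IceCube optimizations exist precisely to avoid this per-rule rescan of every region.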

Relevance: 20.00%

Abstract:

We show that every graph of maximum degree 3 can be represented as the intersection graph of axis-parallel boxes in three dimensions, that is, every vertex can be mapped to an axis-parallel box such that two boxes intersect if and only if their corresponding vertices are adjacent. In fact, we construct a representation in which any two intersecting boxes merely touch at their boundaries. Further, this construction can be realized in linear time.
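
Since the representation is by axis-parallel boxes, verifying it reduces to a per-pair predicate like the following sketch (boundary contact counts as intersection, matching the touching-boxes construction):

```python
def boxes_touch(b1, b2):
    """Do two axis-parallel 3-D boxes intersect, counting boundary
    contact? Each box is ((x1, x2), (y1, y2), (z1, z2)) with lo <= hi.
    Boxes intersect iff their intervals overlap on every axis."""
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(b1, b2))
```

A valid representation of a graph G then requires boxes_touch(box[u], box[v]) to hold for exactly the adjacent pairs {u, v} of G.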

Relevance: 20.00%

Abstract:

The role of crystallite size and clustering in influencing the stability of the structures of the large-tetragonality ferroelectric system 0.6BiFeO3-0.4PbTiO3 was investigated. The system exhibits a cubic phase for a crystallite size of ~25 nm, three times larger than the critical size reported for one of its end members, PbTiO3. With an increased degree of clustering for the same average crystallite size, partial stabilization of the ferroelectric tetragonal phase takes place. The results suggest that clustering helps reduce the depolarization energy without the need to increase the crystallite size of the free particles.

Relevance: 20.00%

Abstract:

The presence of a large number of spectral bands in hyperspectral images increases the capability to distinguish between various physical structures, but it also burdens processing with the high dimensionality of the data. Hence, the processing of hyperspectral images proceeds in two stages: dimensionality reduction followed by unsupervised classification. The high dimensionality of the data is reduced with the help of Principal Component Analysis (PCA). The selected dimensions are then classified using the Niche Hierarchical Artificial Immune System (NHAIS). The NHAIS combines a splitting method, which searches for the optimal cluster centers using a niching procedure, with a merging method that groups the data points based on majority voting. Results are presented for two hyperspectral images, namely an EO-1 Hyperion image and the Indian Pines image. A performance comparison of the proposed hierarchical clustering algorithm with three earlier unsupervised algorithms is presented. From the results obtained, we deduce that the NHAIS is efficient.
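
The first stage is plain PCA on the per-pixel spectra; a minimal sketch is below (the cube layout and function name are illustrative assumptions, and the NHAIS clustering stage is not reproduced here):

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Project each pixel's band vector onto the leading principal
    components of the band covariance matrix.

    cube: (rows, cols, bands) hyperspectral image.
    Returns a (rows, cols, n_components) reduced image."""
    X = cube.reshape(-1, cube.shape[-1]).astype(float)
    X -= X.mean(axis=0)                       # center the band vectors
    cov = np.cov(X, rowvar=False)             # bands x bands covariance
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:n_components]]
    return (X @ top).reshape(cube.shape[0], cube.shape[1], n_components)
```

The reduced pixels would then be fed to the NHAIS split-and-merge clustering described above.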

Relevance: 20.00%

Abstract:

Chebyshev-inequality-based convex relaxations of Chance-Constrained Programs (CCPs) are shown to be useful for learning classifiers on massive datasets. In particular, an algorithm that integrates efficient clustering procedures and CCP approaches for computing classifiers on large datasets is proposed. The key idea is to identify high-density regions, or clusters, from the individual class-conditional densities and then use a CCP formulation to learn a classifier on the clusters. The CCP formulation ensures that most of the data points in a cluster are correctly classified by employing a Chebyshev-inequality-based convex relaxation. This relaxation depends heavily on second-order statistics; however, this formulation, and in general such relaxations that depend on second-order moments, are susceptible to moment estimation errors. One of the contributions of the paper is to propose several formulations that are robust to such errors. In particular, a generic way of making such formulations robust to moment estimation errors is illustrated using two novel confidence sets. An important contribution is to show that when either of the confidence sets is employed, for the special case of clusters with spherical normal distributions, the robust variant of the formulation can be posed as a second-order cone program. Empirical results show that the robust formulations achieve accuracies comparable to those obtained with the true moments, even when the moment estimates are erroneous. The results also illustrate the benefits of employing the proposed methodology for robust classification of large-scale datasets.
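
The relaxation in question is, in its standard form, the multivariate one-sided Chebyshev bound that converts a chance constraint on a cluster with mean $\mu$ and covariance $\Sigma$ into a second-order cone constraint; the notation below is the usual one for this family of formulations, not necessarily the paper's:

```latex
\Pr\big( y\,(w^\top x + b) \ge 0 \big) \;\ge\; \eta
\quad\Longleftarrow\quad
y\,(w^\top \mu + b) \;\ge\; \kappa\, \sqrt{w^\top \Sigma\, w},
\qquad \kappa = \sqrt{\frac{\eta}{1 - \eta}},
```

valid for every distribution sharing those first two moments. The robust variants replace the point estimates $(\mu, \Sigma)$ by confidence sets, and the paper shows the result remains a second-order cone program for spherical normal clusters.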

Relevance: 20.00%

Abstract:

The 2004 Sumatra-Andaman earthquake was unprecedented in terms of its magnitude (M_w 9.2), its rupture length along the plate boundary (1300 km), and the size of the resultant tsunami. Since 2004, efforts have been made to improve the understanding of seismic hazard in the Sumatra-Andaman subduction zone in terms of the recurrence patterns of major earthquakes and tsunamis. It is reasonable to assume that previous earthquake events in the Myanmar-Andaman segment must be preserved in the geological record in the form of seismo-turbidite sequences. Here we present the prospects of conducting deep-ocean palaeoseismicity investigations in order to refine the quantification of the recurrence pattern of large subduction-zone earthquakes along the Andaman-Myanmar arc. Our participation in the Sagar Kanya cruise SK-273 (in June 2010) was to test the efficacy of such a survey. The primary mission of the cruise, along a short stretch (300 km) of the Sumatra-Andaman subduction front, was to collect bathymetric data of the ocean floor trenchward of the Andaman Islands. The agenda of our piggyback survey was to fix potential coring sites that might preserve seismo-turbidite deposits. In this article we present the possibilities and challenges of such an exercise and our first-hand experience of this preliminary survey. This account will help future researchers with similar scientific objectives who want to survey the deep-ocean archives of this region for evidence of extreme events like major earthquakes.

Relevance: 20.00%

Abstract:

Daily rainfall datasets spanning 10 years (1998-2007) from the Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) version 6 and the India Meteorological Department (IMD) gridded rain-gauge product have been compared over the Indian landmass, on both large and small spatial scales. On the larger spatial scale, the pattern correlation between the two datasets on daily scales during individual years of the study period ranges from 0.4 to 0.7. The correlation improves significantly (~0.9) when the study is confined to specific wet and dry spells, each of about 5-8 days. Wavelet analysis of the intraseasonal oscillations (ISO) of the southwest monsoon rainfall shows the percentage contributions of the two major modes (30-50 days and 10-20 days) to range between ~30-40% and ~5-10%, respectively, across the various years. Analysis of inter-annual variability shows that the satellite data underestimate seasonal rainfall by ~110 mm during the southwest monsoon and overestimate it by ~150 mm during the northeast monsoon season. At high spatio-temporal scales, viz., a 1° × 1° grid, TMPA data do not correspond to the ground truth. We propose here a new analysis procedure to assess the minimum spatial scale at which the two datasets are compatible with each other. This is done by studying the contribution to the total seasonal rainfall from different rain-rate windows (at 1 mm intervals) on different spatial scales (at daily time scale). The compatibility spatial scale is found to be beyond a 5° × 5° average spatial scale over the Indian landmass. This will help decide the usability of TMPA products, averaged at appropriate spatial scales, for specific process studies, e.g., on cloud, meso, or synoptic scales.
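
The proposed procedure can be phrased compactly: for data already averaged to the spatial scale under test, compute the fraction of the seasonal total contributed by each 1 mm/day rain-rate window, then compare the TMPA and IMD fractions scale by scale. A hedged sketch of one such computation (function name and bin layout assumed for illustration):

```python
import numpy as np

def rainrate_window_fractions(daily_mm, bin_edges):
    """Fraction of the seasonal rainfall total contributed by each
    rain-rate window, for one grid box at a given spatial scale.

    daily_mm:  1-D array of daily rainfall (mm/day), spatially averaged
               to the scale under test (e.g. 1x1 or 5x5 degrees).
    bin_edges: window boundaries, e.g. np.arange(0, 101, 1.0)."""
    totals = np.array([daily_mm[(daily_mm >= lo) & (daily_mm < hi)].sum()
                       for lo, hi in zip(bin_edges[:-1], bin_edges[1:])])
    return totals / daily_mm.sum()
```

The compatibility scale is then the minimum averaging scale at which the TMPA and IMD fraction profiles come into agreement (beyond 5° × 5° in this study).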

Relevance: 20.00%

Abstract:

In this paper, we explore fundamental limits on the number of tests required to identify a given number of "healthy" items from a large population containing a small number of "defective" items, in a nonadaptive group testing framework. Specifically, we derive mutual-information-based upper bounds on the number of tests required to identify the required number of healthy items. Our results show that an impressive reduction in the number of tests is achievable compared to the conventional approach of using classical group testing to first identify the defective items and then pick the required number of healthy items from the complement set. For example, to identify L healthy items out of a population of N items containing K defective items, when the tests are reliable, our results show that $O(K(L-1)/(N-K))$ measurements are sufficient. In contrast, the conventional approach requires $O(K \log(N/K))$ measurements. We derive our results in a general sparse-signal setup, and hence they are also applicable to other sparse-signal-based applications such as compressive sensing.
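
The setup behind these bounds can be illustrated with a tiny noiseless simulation: random pools over the population, where any item appearing in at least one negative pool is certified healthy. The pooling design and names below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def certify_healthy(N, defective, T, p=0.5, seed=1):
    """Nonadaptive group testing with T random pools over N items.
    Each pool includes each item independently with probability p; a
    pool tests positive iff it contains a defective item. Items seen
    in any negative pool are certainly healthy (noiseless tests)."""
    rng = np.random.default_rng(seed)
    A = rng.random((T, N)) < p                    # T x N pooling matrix
    positive = A[:, sorted(defective)].any(axis=1)
    return {j for j in range(N)
            if A[~positive, j].any()}             # in some negative pool
```

Because a single negative pool certifies every item it contains at once, far fewer tests are needed to certify L healthy items than to isolate all K defectives, consistent with the bounds quoted above.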