49 results for Large-scale Structure

Relevance: 100.00%
Abstract:

In this paper, we propose low-complexity algorithms based on Monte Carlo sampling for signal detection and channel estimation on the uplink in large-scale multiuser multiple-input-multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. A BS receiver that employs a novel mixed sampling technique (which makes a probabilistic choice between Gibbs sampling and random uniform sampling in each coordinate update) for detection and a Gibbs-sampling-based method for channel estimation is proposed. The algorithm proposed for detection alleviates the stalling problem encountered at high signal-to-noise ratios (SNRs) in conventional Gibbs-sampling-based detection and achieves near-optimal performance in large systems with M-ary quadrature amplitude modulation (M-QAM). A novel ingredient in the detection algorithm that is responsible for achieving near-optimal performance at low complexity is the joint use of a mixed Gibbs sampling (MGS) strategy coupled with a multiple restart (MR) strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for a large number of BS antennas and users (e.g., 64 and 128 BS antennas and users). The proposed Gibbs-sampling-based channel estimation algorithm refines an initial estimate of the channel obtained during the pilot phase through iterations with the proposed MGS-based detection during the data phase. In time-division duplex systems where channel reciprocity holds, these channel estimates can be used for multiuser MIMO precoding on the downlink. The proposed receiver is shown to achieve good performance and scale well for large dimensions.
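
A minimal NumPy sketch of the mixed coordinate update described above (a sketch under stated assumptions: the mixing probability q, iteration count, and real-valued PAM alphabet are illustrative, and the multiple-restart loop and restart criterion are omitted):

```python
import numpy as np

def mgs_detect(y, H, sigma2, alphabet, q=0.1, n_iter=50, rng=None):
    """One run of a mixed-Gibbs-sampling detector for y = Hx + n.

    In each coordinate update the new symbol is drawn either from the
    Gibbs conditional (with probability 1 - q) or uniformly at random
    from the alphabet (with probability q), which is the mixing idea
    described in the abstract.  q and the alphabet are illustrative.
    """
    rng = np.random.default_rng() if rng is None else rng
    K = H.shape[1]
    x = rng.choice(alphabet, size=K)                 # random initial vector
    best_x = x.copy()
    best_cost = np.linalg.norm(y - H @ x) ** 2

    for _ in range(n_iter):
        for i in range(K):
            if rng.random() < q:
                # random (uniform) coordinate update
                x[i] = rng.choice(alphabet)
            else:
                # Gibbs update: sample x_i from p(x_i | x_{-i}, y)
                costs = []
                for s in alphabet:
                    x[i] = s
                    costs.append(np.linalg.norm(y - H @ x) ** 2)
                logp = -np.array(costs) / sigma2
                p = np.exp(logp - logp.max())
                p /= p.sum()
                x[i] = rng.choice(alphabet, p=p)
            cost = np.linalg.norm(y - H @ x) ** 2
            if cost < best_cost:                     # keep the best vector seen
                best_cost, best_x = cost, x.copy()
    return best_x
```

A full detector would wrap this routine in multiple restarts and terminate each restart according to the paper's restart criterion.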

Relevance: 100.00%
Abstract:

In this paper, we propose a low-complexity algorithm based on a Markov chain Monte Carlo (MCMC) technique for signal detection on the uplink in large-scale multiuser multiple-input multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. The algorithm employs a randomized sampling method (which makes a probabilistic choice between Gibbs sampling and random sampling in each iteration) for detection. The proposed algorithm alleviates the stalling problem encountered at high SNRs in the conventional MCMC algorithm and achieves near-optimal performance in large systems with M-QAM. A novel ingredient in the algorithm that is responsible for achieving near-optimal performance at low complexity is the joint use of a randomized MCMC (R-MCMC) strategy coupled with a multiple restart strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for a large number of BS antennas and users (e.g., 64, 128, 256 BS antennas/users).

Relevance: 100.00%
Abstract:

Elastic net regularizers have shown much promise in designing sparse classifiers for linear classification. In this work, we propose an alternating optimization approach to solve the dual problems of elastic-net-regularized linear classification Support Vector Machines (SVMs) and logistic regression (LR). One of the sub-problems turns out to be a simple projection. The other sub-problem can be solved using dual coordinate descent methods developed for non-sparse L2-regularized linear SVMs and LR, without altering their iteration complexity and convergence properties. Experiments on very large datasets indicate that the proposed dual coordinate descent-projection (DCD-P) methods are fast and achieve comparable generalization performance after the first pass through the data, with extremely sparse models.
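
For reference, one common form of the elastic-net-regularized linear SVM objective whose dual such methods address (the exact weighting of the two regularization terms here is an assumption, not notation taken from the paper):

```latex
\min_{\mathbf{w}} \;\; \lambda_1 \|\mathbf{w}\|_1
\;+\; \frac{\lambda_2}{2}\,\|\mathbf{w}\|_2^2
\;+\; \sum_{i=1}^{n} \max\bigl(0,\; 1 - y_i\,\mathbf{w}^{\top}\mathbf{x}_i\bigr)
```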

Relevance: 100.00%
Abstract:

In this paper, we propose a multiple-input multiple-output (MIMO) receiver algorithm that exploits channel hardening that occurs in large MIMO channels. Channel hardening refers to the phenomenon where the off-diagonal terms of the Gram matrix H^T H become increasingly weaker compared to the diagonal terms as the size of the channel gain matrix H increases. Specifically, we propose a message passing detection (MPD) algorithm which works with the real-valued matched filtered received vector (whose signal term becomes H^T Hx, where x is the transmitted vector), and uses a Gaussian approximation on the off-diagonal terms of the H^T H matrix. We also propose a simple estimation scheme which directly obtains an estimate of H^T H (instead of an estimate of H), which is used as an effective channel estimate in the MPD algorithm. We refer to this receiver as the channel hardening-exploiting message passing (CHEMP) receiver. The proposed CHEMP receiver achieves very good performance in large-scale MIMO systems (e.g., in systems with 16 to 128 uplink users and 128 base station antennas). For the considered large MIMO settings, the complexity of the proposed MPD algorithm is almost the same as or less than that of minimum mean square error (MMSE) detection. This is because the MPD algorithm does not need a matrix inversion. It also achieves significantly better performance compared to MMSE and other message passing detection algorithms that use an MMSE estimate of H. Further, we design optimized irregular low density parity check (LDPC) codes specific to the considered large MIMO channel and the CHEMP receiver through EXIT chart matching. The LDPC codes thus obtained achieve improved coded bit error rate performance compared to off-the-shelf irregular LDPC codes.
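
A minimal sketch of the matched-filter front end implied by this description, assuming a real-valued system model and a 1/K scaling (both assumptions of this sketch, not values stated in the abstract):

```python
import numpy as np

def chemp_preprocess(y, H):
    """Matched-filter front end for a CHEMP-style receiver.

    Forms z = H^T y / K and J = H^T H / K, so the signal term of z is J x.
    Under channel hardening the off-diagonal entries of J shrink relative
    to its diagonal as the system grows, which is what the Gaussian
    approximation in the detector exploits.  The 1/K scaling is an
    assumption of this sketch.
    """
    K = H.shape[1]                 # number of transmitted (real) symbols
    J = (H.T @ H) / K              # effective channel (Gram) matrix
    z = (H.T @ y) / K              # matched-filtered observation
    return z, J
```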

Relevance: 100.00%
Abstract:

The authors report a detailed investigation of the flicker noise (1/f noise) in graphene films obtained from chemical vapour deposition (CVD) and chemical reduction of graphene oxide. The authors find that in the case of polycrystalline graphene films grown by CVD, the grain boundaries and other structural defects are the dominant source of noise, acting as charged trap centres and resulting in a huge increase in noise compared with that of exfoliated graphene. A study of the kinetics of defects in hydrazine-reduced graphene oxide (RGO) films as a function of the extent of reduction showed that for longer hydrazine treatment times, strong localised crystal defects are introduced in RGO, whereas RGO with shorter hydrazine treatment showed the presence of a large number of mobile defects, leading to higher noise amplitude.

Relevance: 100.00%
Abstract:

In this paper, using idealized climate model simulations, we investigate the biogeophysical effects of large-scale deforestation on monsoon regions. We find that the remote forcing from large-scale deforestation in the northern middle and high latitudes shifts the Intertropical Convergence Zone southward. This results in a significant decrease in precipitation in the Northern Hemisphere monsoon regions (East Asia, North America, North Africa, and South Asia) and moderate precipitation increases in the Southern Hemisphere monsoon regions (South Africa, South America, and Australia). The magnitude of the monsoonal precipitation changes depends on the location of deforestation, with remote effects showing a larger influence than local effects. The South Asian Monsoon region is affected the most, with an 18% decline in precipitation over India. Our results indicate that any comprehensive assessment of afforestation/reforestation as a climate change mitigation strategy should carefully evaluate the remote effects on monsoonal precipitation alongside the large local impacts on temperatures.

Relevance: 100.00%
Abstract:

Spatial modulation (SM) is attractive for multiantenna wireless communications. SM uses multiple transmit antenna elements but only one transmit radio frequency (RF) chain. In SM, in addition to the information bits conveyed through conventional modulation symbols (e.g., QAM), the index of the active transmit antenna also conveys information bits. In this paper, we establish that SM has a significant signal-to-noise ratio (SNR) advantage over conventional modulation in large-scale multiuser multiple-input multiple-output (MIMO) systems. Our new contribution in this paper addresses the key issue of large-dimension signal processing at the base station (BS) receiver (e.g., signal detection) in large-scale multiuser SM-MIMO systems, where each user is equipped with multiple transmit antennas (e.g., 2 or 4 antennas) but only one transmit RF chain, and the BS is equipped with tens to hundreds of (e.g., 128) receive antennas. Specifically, we propose two novel algorithms for detection of large-scale SM-MIMO signals at the BS; one is based on message passing and the other is based on local search. The proposed algorithms achieve very good performance and scale well. For the same spectral efficiency, multiuser SM-MIMO outperforms conventional multiuser MIMO (recently referred to as massive MIMO) by several dB. The SNR advantage of SM-MIMO over massive MIMO can be attributed to the facts that: (i) because of the spatial index bits, SM-MIMO can use a lower-order QAM alphabet than massive MIMO to achieve the same spectral efficiency, and (ii) for the same spectral efficiency and QAM size, massive MIMO needs more spatial streams per user, which leads to increased spatial interference.
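
An illustrative bit-to-signal mapping for one SM channel use (the mapping convention and helper names here are hypothetical, chosen only to show how antenna-index bits and QAM bits combine):

```python
import numpy as np

def sm_map(bits, n_tx, qam_symbols):
    """Map a block of bits to one spatial-modulation channel use.

    The first log2(n_tx) bits select the single active transmit antenna
    and the remaining log2(|QAM|) bits select the QAM symbol sent on it.
    """
    n_ant_bits = int(np.log2(n_tx))
    n_sym_bits = int(np.log2(len(qam_symbols)))
    assert len(bits) == n_ant_bits + n_sym_bits
    antenna_index = int("".join(map(str, bits[:n_ant_bits])), 2)
    symbol = qam_symbols[int("".join(map(str, bits[n_ant_bits:])), 2)]
    x = np.zeros(n_tx, dtype=complex)
    x[antenna_index] = symbol      # only one antenna is active per channel use
    return x

# example: 4 transmit antennas, 4-QAM -> 2 + 2 = 4 bits per channel use
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
print(sm_map([1, 0, 1, 1], 4, qpsk))
```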

Relevance: 100.00%
Abstract:

Exascale systems of the future are predicted to have a mean time between failures (MTBF) of less than one hour. At such low MTBFs, employing periodic checkpointing alone will result in low efficiency, because the high number of application failures leads to a large amount of lost work due to rollbacks. In such scenarios, it is essential to have proactive fault tolerance mechanisms that can help avoid a significant number of failures. In this work, we have developed a mechanism for proactive fault tolerance using partial replication of a set of application processes. Our fault tolerance framework adaptively changes the set of replicated processes periodically, based on failure predictions, in order to avoid failures. We have developed an MPI prototype implementation, PAREP-MPI, that allows changing the replica set. We have shown that our strategy involving adaptive process replication significantly outperforms existing mechanisms, providing up to 20 percent improvement in application efficiency even for exascale systems.
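
A toy sketch of the kind of adaptive replica-set selection described above (this top-k policy and the function name are illustrative assumptions, not the actual PAREP-MPI selection logic):

```python
def choose_replica_set(failure_prob, budget):
    """Pick which application processes to replicate in this interval.

    Given a predicted failure probability per process rank, replicate the
    `budget` most failure-prone ranks.  This only illustrates the idea of
    periodically adapting a partial replica set to failure predictions.
    """
    ranked = sorted(range(len(failure_prob)),
                    key=lambda r: failure_prob[r], reverse=True)
    return set(ranked[:budget])

# example: replicate the 2 ranks predicted most likely to fail
print(choose_replica_set([0.01, 0.30, 0.05, 0.22], budget=2))  # {1, 3}
```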

Relevance: 100.00%
Abstract:

Generalized spatial modulation (GSM) uses n_t transmit antenna elements but fewer transmit radio frequency (RF) chains, n_rf. Spatial modulation (SM) and spatial multiplexing are special cases of GSM with n_rf = 1 and n_rf = n_t, respectively. In GSM, in addition to conveying information bits through n_rf conventional modulation symbols (for example, QAM), the indices of the n_rf active transmit antennas also convey information bits. In this paper, we investigate GSM for large-scale multiuser MIMO communications on the uplink. Our contributions in this paper include: 1) an average bit error probability (ABEP) analysis for maximum-likelihood detection in multiuser GSM-MIMO on the uplink, where we derive an upper bound on the ABEP, and 2) low-complexity algorithms for GSM-MIMO signal detection and channel estimation at the base station receiver based on message passing. The analytical upper bounds on the ABEP are found to be tight at moderate to high signal-to-noise ratios (SNR). The proposed receiver algorithms are found to scale very well in complexity while achieving near-optimal performance in large dimensions. Simulation results show that, for the same spectral efficiency, multiuser GSM-MIMO can outperform multiuser SM-MIMO as well as conventional multiuser MIMO by about 2 to 9 dB at a bit error rate of 10^-3. Such SNR gains in GSM-MIMO compared to SM-MIMO and conventional MIMO can be attributed to the fact that, because of a larger number of spatial index bits, GSM-MIMO can use a lower-order QAM alphabet, which is more power efficient.
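
A small sketch of the standard antenna-index bit count in GSM, consistent with the description above (illustrative only):

```python
from math import comb, floor, log2

def gsm_index_bits(n_t, n_rf):
    """Information bits carried by the choice of active antennas in GSM:
    floor(log2 C(n_t, n_rf)) antenna-index bits per channel use."""
    return floor(log2(comb(n_t, n_rf)))

# example: 8 transmit antennas, 2 RF chains -> floor(log2 28) = 4 index bits,
# versus log2(8) = 3 index bits for SM (n_rf = 1)
print(gsm_index_bits(8, 2))
```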

Relevance: 100.00%
Abstract:

We show that the removal of angular momentum is possible in the presence of large-scale magnetic stresses in geometrically thick, advective, sub-Keplerian accretion flows around black holes in steady state, in the complete absence of alpha-viscosity. The efficiency of such angular momentum transfer could be equivalent to that of alpha-viscosity with alpha = 0.01-0.08. Nevertheless, the required field is well below its equipartition value, leading to a magnetically stable disk flow. This is essential for describing the hard spectral state of the sources when the flow is non-/sub-Keplerian. We show, in our simpler 1.5-dimensional, vertically averaged disk model, that the larger the vertical gradient of the azimuthal component of the magnetic field, the stronger the rate of angular momentum transfer, which in turn may lead to a faster rate of outflowing matter. Finding efficient angular momentum transfer in black hole disks via magnetic stresses alone is especially interesting given that the generic origin of alpha-viscosity is still being explored.

Relevance: 100.00%
Abstract:

In this study, the fine-scale structure of the diurnal variability of ground-based lightning is systematically compared with satellite-based rain. At the outset, it is shown that tropical variability of lightning exhibits a prominent diurnal mode, much like rain. A comparison of the geographical distribution of the timing of the diurnal maximum shows that there is very good agreement between the two observables over continental and coastal regions throughout the tropics. Following this global tropical comparison, we focus on two regions, Borneo and equatorial South America, both of which show the interplay between oceanward and landward propagations of the phase of the diurnal maximum. Over Borneo, both rain and lightning clearly show a climatological cycle of "breathing in" (afternoon to early morning) and "breathing out" (morning to early afternoon). Over the equatorial east coast of South America, landward propagation is noticed in rain and lightning from early afternoon to early morning. Along the Pacific coast of South America, both rain and lightning show oceanward propagation. Though qualitatively consistent, over both regions the propagation is seen to extend further in rainfall. Additionally, given that lightning highlights vigorous convection, the timing of its diurnal maximum often precedes that of rainfall in the convective life cycle.

Relevance: 100.00%
Abstract:

The tropical easterly jet (TEJ) is a prominent atmospheric circulation feature observed during the Asian summer monsoon. It is generally assumed that sensible heating over the Tibetan Plateau directly influences the location of the TEJ. However, other studies have suggested the importance of latent heating in determining the jet location. In this paper, the relative importance of latent heating in the maintenance of the TEJ is explored through simulations with a general circulation model. The simulation of the TEJ by the Community Atmosphere Model, version 3.1, is discussed in detail. These simulations showed that the location of the TEJ is well correlated with the location of the precipitation. Significant zonal shifts in the location of the precipitation resulted in similar shifts in the zonal location of the TEJ. These zonal shifts had minimal effect on the large-scale structure of the jet. Further, provided that precipitation patterns were relatively unchanged, orography did not directly impact the location of the TEJ. These changes were robust even with changes in the cumulus parameterization. This suggests the potentially important role of latent heating in determining the location and structure of the TEJ. These results were used to explain the significant differences in the zonal location of the TEJ in the years 1988 and 2002. To understand the contribution of the latitudinal location of latent heating to the strength of the TEJ, aqua-planet simulations were carried out. These show that, for similar amounts of net latent heating, the jet is stronger when heating is in the higher tropical latitudes. This may partly explain why the jet is so strong during the JJA monsoon season.

Relevance: 100.00%
Abstract:

The paper describes the sensitivity of the simulated precipitation to changes in the convective relaxation time scale (TAU) of the Zhang and McFarlane (ZM) cumulus parameterization in the NCAR Community Atmosphere Model version 3 (CAM3). In the default configuration of the model, the prescribed value of TAU, a characteristic time scale with which convective available potential energy (CAPE) is removed at an exponential rate by convection, is assumed to be 1 h. However, some recent observational findings suggest that it is larger by around one order of magnitude. In order to explore the sensitivity of the model simulation to TAU, two model frameworks have been used, namely, aqua-planet and actual-planet configurations. Numerical integrations have been carried out using different values of TAU, and the effect on simulated precipitation has been analyzed. The aqua-planet simulations reveal that when TAU increases, the rate of deep convective precipitation (DCP) decreases, and this leads to an accumulation of convective instability in the atmosphere. Consequently, the moisture content in the lower- and mid-troposphere increases. On the other hand, the shallow convective precipitation (SCP) and large-scale precipitation (LSP) intensify, predominantly the SCP, thus capping the accumulation of convective instability in the atmosphere. The total precipitation (TP) remains approximately constant, but the proportion of the three components changes significantly, which in turn alters the vertical distribution of total precipitation production. The vertical structure of moist heating changes from a vertically extended profile to a bottom-heavy profile with the increase of TAU. The altitude of the maximum vertical velocity shifts from the upper troposphere to the lower troposphere. A similar response was seen in the actual-planet simulations. With an increase in TAU from 1 h to 8 h, there was a significant improvement in the simulation of the seasonal mean precipitation. The fraction of deep convective precipitation was in much better agreement with satellite observations.
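
In schematic form, the role of TAU is that CAPE is relaxed toward zero with e-folding time TAU (this is the qualitative closure described above, not the full ZM closure expression):

```latex
\frac{d\,\mathrm{CAPE}}{dt} \;=\; -\,\frac{\mathrm{CAPE}}{\tau}
\quad\Longrightarrow\quad
\mathrm{CAPE}(t) \;=\; \mathrm{CAPE}(0)\, e^{-t/\tau}
```

A larger TAU therefore removes instability more slowly, consistent with the reduced deep convective precipitation reported in the simulations.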

Relevance: 100.00%
Abstract:

Near-wall structures in turbulent natural convection at Rayleigh numbers of $10^{10}$ to $10^{11}$ and a Schmidt number of 602 are visualized by a new method of driving the convection across a fine membrane using concentration differences of sodium chloride. The visualizations show the near-wall flow to consist of sheet plumes. A wide variety of large-scale flow cells, scaling with the cross-section dimension, are observed. Multiple large-scale flow cells are seen at aspect ratio (AR) = 0.65, while only a single circulation cell is detected at AR = 0.435. The cells (or the mean wind) are driven by plumes coming together to form columns of rising lighter fluid. The wind in turn aligns the sheet plumes along the direction of shear. The mean wind direction is seen to change with time. The near-wall dynamics show plumes initiated at points, which elongate to form sheets and then merge. An increase in Rayleigh number results in a larger number of closely and regularly spaced plumes. The plume spacings show a common log–normal probability distribution function, independent of the Rayleigh number and the aspect ratio. We propose that the near-wall structure is made of laminar natural-convection boundary layers, which become unstable to give rise to sheet plumes, and show that the predictions of a model constructed on this hypothesis match the experiments. Based on these findings, we conclude that in the presence of a mean wind, the local near-wall boundary layers associated with each sheet plume in high-Rayleigh-number turbulent natural convection are likely to be of the laminar mixed convection type.

Relevance: 100.00%
Abstract:

In data mining, an important goal is to generate an abstraction of the data. Such an abstraction helps in reducing the space and search time requirements of the overall decision making process. Further, it is important that the abstraction is generated from the data with a small number of disk scans. We propose a novel data structure, the pattern count tree (PC-tree), that can be built by scanning the database only once. The PC-tree is a minimal-size, complete representation of the data, and it can be used to represent dynamic databases with the help of knowledge that is either static or changing. We show that further compactness can be achieved by constructing the PC-tree on segmented patterns. We exploit the flexibility offered by rough sets to realize a rough PC-tree and use it for efficient and effective rough classification. To be consistent with the sizes of the branches of the PC-tree, we use upper and lower approximations of feature sets in a manner different from that of conventional rough set theory. We conducted experiments using the proposed classification scheme on a large-scale hand-written digit data set and use the experimental results to establish the efficacy of the proposed approach.
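
A minimal sketch of a prefix/count tree in the spirit of the PC-tree idea (segmented patterns and the rough-set extensions from the paper are not modelled; names and ordering conventions are illustrative):

```python
class PCTreeNode:
    def __init__(self):
        self.count = 0
        self.children = {}

class PCTree:
    """Each transaction (pattern) is inserted along one root-to-node path,
    incrementing counts, so the whole database is summarized in a single
    scan of the data."""
    def __init__(self):
        self.root = PCTreeNode()

    def insert(self, pattern):
        node = self.root
        for item in pattern:              # items assumed in a canonical order
            node = node.children.setdefault(item, PCTreeNode())
            node.count += 1

    def support(self, prefix):
        node = self.root
        for item in prefix:
            if item not in node.children:
                return 0
            node = node.children[item]
        return node.count

# single database scan: insert every transaction once
tree = PCTree()
for txn in [["a", "b", "c"], ["a", "b"], ["a", "c"]]:
    tree.insert(txn)
print(tree.support(["a", "b"]))   # 2
```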