49 results for "large scale linear system"


Relevance: 100.00%

Abstract:

Exascale systems of the future are predicted to have a mean time between failures (MTBF) of less than one hour. Malleable applications, where the number of processors on which an application executes can be changed during execution, can use their malleability to better tolerate high failure rates. We present AdFT, an adaptive fault tolerance framework for long-running malleable applications that maximizes application performance in the presence of failures. The AdFT framework includes cost models for evaluating the benefits of various fault tolerance actions, including checkpointing, live migration and rescheduling, and runtime decisions for dynamically selecting among these actions at different points of application execution to maximize performance. Simulations with real and synthetic failure traces show that our approach outperforms existing fault tolerance mechanisms for malleable applications, yielding up to 23% improvement in application performance, and is effective even for petascale systems and beyond.
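
The flavor of such a runtime decision can be pictured with a small sketch. The cost model below is hypothetical (the expected-cost expressions, the numbers, and the names Action, expected_cost and select_action are illustrative, not taken from AdFT); it only shows the pattern of comparing the expected overheads of checkpointing, live migration and rescheduling against the predicted failure risk.

```python
# Illustrative sketch (not the AdFT cost model itself): pick the fault tolerance
# action with the lowest expected cost, given a predicted failure probability.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    overhead: float        # seconds spent performing the action now
    recovery_time: float   # seconds lost if a failure strikes after the action

def expected_cost(action: Action, p_fail: float) -> float:
    """Expected time lost = action overhead + (failure probability x recovery cost)."""
    return action.overhead + p_fail * action.recovery_time

def select_action(p_fail: float) -> Action:
    # Hypothetical numbers, purely for illustration.
    candidates = [
        Action("checkpoint",     overhead=60.0,  recovery_time=300.0),
        Action("live-migration", overhead=120.0, recovery_time=30.0),
        Action("reschedule",     overhead=400.0, recovery_time=10.0),
        Action("do-nothing",     overhead=0.0,   recovery_time=1800.0),
    ]
    return min(candidates, key=lambda a: expected_cost(a, p_fail))

if __name__ == "__main__":
    for p in (0.01, 0.2, 0.8):
        print(p, select_action(p).name)
```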

Relevance: 100.00%

Abstract:

We report on the large-scale synthesis of millimetre-long buckled multiwalled carbon nanotubes by one-step pyrolysis. The current-carrying capability of a highly buckled region is shown to be higher than that of a less buckled region.

Relevance: 100.00%

Abstract:

The role of crystallite size and clustering in influencing the stability of the structures of the large-tetragonality ferroelectric system 0.6BiFeO3-0.4PbTiO3 was investigated. The system exhibits a cubic phase for a crystallite size of ~25 nm, three times larger than the critical size reported for one of its end members, PbTiO3. With an increased degree of clustering for the same average crystallite size, partial stabilization of the ferroelectric tetragonal phase takes place. The results suggest that clustering helps in reducing the depolarization energy without the need for increasing the crystallite size of free particles.

Relevance: 100.00%

Abstract:

Chebyshev-inequality-based convex relaxations of Chance-Constrained Programs (CCPs) are shown to be useful for learning classifiers on massive datasets. In particular, an algorithm that integrates efficient clustering procedures and CCP approaches for computing classifiers on large datasets is proposed. The key idea is to identify high-density regions, or clusters, from the individual class-conditional densities and then use a CCP formulation to learn a classifier on the clusters. The CCP formulation ensures that most of the data points in a cluster are correctly classified by employing a Chebyshev-inequality-based convex relaxation. This relaxation depends heavily on second-order statistics. However, this formulation, and in general such relaxations that depend on second-order moments, is susceptible to moment estimation errors. One of the contributions of the paper is to propose several formulations that are robust to such errors. In particular, a generic way of making such formulations robust to moment estimation errors is illustrated using two novel confidence sets. An important contribution is to show that when either of the confidence sets is employed, for the special case of clusters with spherical normal distributions, the robust variant of the formulation can be posed as a second-order cone program. Empirical results show that the robust formulations achieve accuracies comparable to those obtained with the true moments, even when the moment estimates are erroneous. The results also illustrate the benefits of employing the proposed methodology for robust classification of large-scale datasets.
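
To make the dependence on second-order moments concrete, the standard Chebyshev (Cantelli) relaxation used in such CCP formulations can be written as follows; the notation (cluster mean \mu_j, covariance \Sigma_j, label y_j, confidence level \eta) is ours rather than the paper's.

```latex
% Chance constraint: points from cluster j (mean \mu_j, covariance \Sigma_j,
% label y_j) should be classified correctly with probability at least \eta:
\Pr_{x \sim (\mu_j,\,\Sigma_j)}\!\big( y_j\,(w^\top x + b) \ge 0 \big) \;\ge\; \eta .
% The multivariate Chebyshev (Cantelli) inequality yields the sufficient
% second-order cone constraint used in the convex relaxation:
y_j\,(w^\top \mu_j + b) \;\ge\; \kappa \sqrt{w^\top \Sigma_j\, w},
\qquad \kappa = \sqrt{\tfrac{\eta}{1-\eta}} .
```

Moment-estimation errors enter precisely through \mu_j and \Sigma_j, which is why the robust variants replace these point estimates with confidence sets.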

Relevance: 100.00%

Abstract:

Mycobacterium tuberculosis owes its high pathogenic potential to its ability to evade host immune responses and thrive inside the macrophage. The outcome of infection is largely determined by the cellular response, which comprises a multitude of molecular events. The complexity and inter-relatedness of these processes make it essential to adopt systems approaches to study them. In this work, we construct a comprehensive network of infection-related processes in a human macrophage comprising 1888 proteins and 14,016 interactions. We then compute response networks based on available gene expression profiles corresponding to states of health, disease and drug treatment. We use a novel formulation for mining response networks that identifies the highest-activity paths in the cell. These highest-activity paths provide mechanistic insights into pathogenesis and response to treatment. The approach used here serves as a generic framework for mining dynamic changes in genome-scale protein interaction networks.
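
As one concrete (and hypothetical) way to realize this kind of response-network mining, interactions can be weighted by the expression of their endpoint proteins and high-activity paths extracted as shortest paths after a log transform; the weighting scheme, the function names and the example proteins below are ours, not the authors'.

```python
# Hypothetical sketch: build a response network by weighting each interaction
# with the product of its endpoints' expression values, then find the
# highest-activity path as a shortest path on -log-transformed weights.
import math
import networkx as nx

def response_network(interactions, expression):
    g = nx.Graph()
    weights = {}
    for u, v in interactions:
        w = expression.get(u, 0.0) * expression.get(v, 0.0)
        if w > 0:
            weights[(u, v)] = w
    top = max(weights.values())
    for (u, v), w in weights.items():
        # Normalize so costs are non-negative; maximizing the product of edge
        # activities along a path == minimizing the sum of these costs.
        g.add_edge(u, v, cost=-math.log(w / top))
    return g

def highest_activity_path(g, source, target):
    return nx.shortest_path(g, source, target, weight="cost")

if __name__ == "__main__":
    edges = [("TLR2", "MYD88"), ("MYD88", "IRAK4"), ("IRAK4", "TRAF6")]
    expr = {"TLR2": 2.0, "MYD88": 1.5, "IRAK4": 0.8, "TRAF6": 2.2}
    net = response_network(edges, expr)
    print(highest_activity_path(net, "TLR2", "TRAF6"))
```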

Relevance: 100.00%

Abstract:

In this paper, we propose low-complexity algorithms based on Monte Carlo sampling for signal detection and channel estimation on the uplink in large-scale multiuser multiple-input-multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. A BS receiver that employs a novel mixed sampling technique (which makes a probabilistic choice between Gibbs sampling and random uniform sampling in each coordinate update) for detection and a Gibbs-sampling-based method for channel estimation is proposed. The algorithm proposed for detection alleviates the stalling problem encountered at high signal-to-noise ratios (SNRs) in conventional Gibbs-sampling-based detection and achieves near-optimal performance in large systems with M-ary quadrature amplitude modulation (M-QAM). A novel ingredient in the detection algorithm that is responsible for achieving near-optimal performance at low complexity is the joint use of a mixed Gibbs sampling (MGS) strategy coupled with a multiple restart (MR) strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for a large number of BS antennas and users (e.g., 64 and 128 BS antennas and users). The proposed Gibbs-sampling-based channel estimation algorithm refines an initial estimate of the channel obtained during the pilot phase through iterations with the proposed MGS-based detection during the data phase. In time-division duplex systems where channel reciprocity holds, these channel estimates can be used for multiuser MIMO precoding on the downlink. The proposed receiver is shown to achieve good performance and scale well for large dimensions.
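
To make the mixed sampling step concrete, here is a minimal real-valued sketch of the coordinate update. The function name mgs_detect, the default mixing probability q = 1/n and the restart-free structure are our simplifications for illustration, not the paper's exact algorithm.

```python
# Simplified sketch of a mixed Gibbs sampling (MGS) detector for y = Hx + n
# (real-valued model). In each coordinate update, with probability q sample
# uniformly at random; otherwise draw from the Gibbs conditional. The random
# draws are what alleviate stalling at high SNR.
import numpy as np

def mgs_detect(y, H, alphabet, sigma2, iters=100, q=None, rng=None):
    rng = rng or np.random.default_rng()
    n = H.shape[1]
    q = q if q is not None else 1.0 / n               # mixing probability (a common choice)
    x = rng.choice(alphabet, size=n)
    best, best_cost = x.copy(), float(np.sum((y - H @ x) ** 2))
    for _ in range(iters):
        for i in range(n):
            if rng.random() < q:
                x[i] = rng.choice(alphabet)           # random (uniform) sampling
            else:
                costs = []
                for s in alphabet:                    # Gibbs conditional over coordinate i
                    x[i] = s
                    costs.append(np.sum((y - H @ x) ** 2))
                costs = np.array(costs)
                p = np.exp(-(costs - costs.min()) / (2 * sigma2))
                x[i] = rng.choice(alphabet, p=p / p.sum())
        cost = float(np.sum((y - H @ x) ** 2))
        if cost < best_cost:
            best, best_cost = x.copy(), cost
    return best
```

With the usual real-valued decomposition of an M-QAM system, alphabet would be the corresponding PAM set, e.g., [-3, -1, 1, 3] for 16-QAM.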

Relevance: 100.00%

Abstract:

In this paper, we propose a low-complexity algorithm based on the Markov chain Monte Carlo (MCMC) technique for signal detection on the uplink in large-scale multiuser multiple-input multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. The algorithm employs a randomized sampling method (which makes a probabilistic choice between Gibbs sampling and random sampling in each iteration) for detection. The proposed algorithm alleviates the stalling problem encountered at high SNRs in the conventional MCMC algorithm and achieves near-optimal performance in large systems with M-QAM. A novel ingredient in the algorithm that is responsible for achieving near-optimal performance at low complexity is the joint use of a randomized MCMC (R-MCMC) strategy coupled with a multiple restart strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for large numbers of BS antennas and users (e.g., 64, 128, 256 BS antennas/users).
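
The multiple restart strategy wrapped around such a sampler can be sketched as follows. The noise-energy stopping rule below is a simplification of the paper's restart criterion, and sampler can be any randomized detector of the kind sketched after the previous abstract.

```python
# Simplified multiple-restart wrapper: rerun the randomized sampler from fresh
# random initializations and keep the best solution found, stopping early once
# the residual energy is already at the noise level.
import numpy as np

def mr_detect(y, H, alphabet, sigma2, sampler, max_restarts=10, rng=None):
    rng = rng or np.random.default_rng()
    threshold = H.shape[0] * sigma2                   # rough noise-energy stopping rule
    best, best_cost = None, np.inf
    for _ in range(max_restarts):
        x = sampler(y, H, alphabet, sigma2, rng=rng)  # fresh random start inside the sampler
        cost = float(np.sum((y - H @ x) ** 2))
        if cost < best_cost:
            best, best_cost = x, cost
        if best_cost <= threshold:                    # solution explains y up to noise level
            break
    return best
```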

Relevance: 100.00%

Abstract:

In this paper, we propose a multiple-input multiple-output (MIMO) receiver algorithm that exploits the channel hardening that occurs in large MIMO channels. Channel hardening refers to the phenomenon where the off-diagonal terms of the channel Gram matrix H^T H become increasingly weaker compared to the diagonal terms as the size of the channel gain matrix H increases. Specifically, we propose a message passing detection (MPD) algorithm which works with the real-valued matched filtered received vector z = H^T y (whose signal term becomes J x, where J = H^T H and x is the transmitted vector), and uses a Gaussian approximation on the off-diagonal terms of J. We also propose a simple estimation scheme which directly obtains an estimate of J (instead of an estimate of H), which is used as an effective channel estimate in the MPD algorithm. We refer to this receiver as the channel hardening-exploiting message passing (CHEMP) receiver. The proposed CHEMP receiver achieves very good performance in large-scale MIMO systems (e.g., in systems with 16 to 128 uplink users and 128 base station antennas). For the considered large MIMO settings, the complexity of the proposed MPD algorithm is almost the same as or less than that of minimum mean square error (MMSE) detection. This is because the MPD algorithm does not need a matrix inversion. It also achieves significantly better performance compared to MMSE and other message passing detection algorithms that use an MMSE estimate of the channel. Further, we design optimized irregular low density parity check (LDPC) codes specific to the considered large MIMO channel and the CHEMP receiver through EXIT chart matching. The LDPC codes thus obtained achieve improved coded bit error rate performance compared to off-the-shelf irregular LDPC codes.
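
In equation form, the quantities the CHEMP receiver works with can be summarized as follows; the 1/K normalization (K being the number of real receive dimensions) is our way of making the hardening explicit, and the paper's exact scaling may differ.

```latex
\mathbf{z} \;=\; \frac{1}{K}\,\mathbf{H}^{\mathsf T}\mathbf{y}
           \;=\; \mathbf{J}\,\mathbf{x} + \mathbf{v},
\qquad
\mathbf{J} \;=\; \frac{1}{K}\,\mathbf{H}^{\mathsf T}\mathbf{H},
\qquad
\mathbf{v} \;=\; \frac{1}{K}\,\mathbf{H}^{\mathsf T}\mathbf{n}.
```

As K grows, the diagonal entries of J concentrate around their mean while the off-diagonal entries shrink relative to them, which is what justifies treating the off-diagonal contribution (plus noise) as Gaussian in the MPD algorithm.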

Relevance: 100.00%

Abstract:

The authors report a detailed investigation of the flicker noise (1/f noise) in graphene films obtained from chemical vapour deposition (CVD) and from chemical reduction of graphene oxide. The authors find that in the case of polycrystalline graphene films grown by CVD, the grain boundaries and other structural defects are the dominant source of noise: by acting as charged trap centres, they result in a huge increase in noise compared with that of exfoliated graphene. A study of the kinetics of defects in hydrazine-reduced graphene oxide (RGO) films as a function of the extent of reduction showed that for longer hydrazine treatment times, strong localised crystal defects are introduced in the RGO, whereas RGO with shorter hydrazine treatment showed the presence of a large number of mobile defects, leading to a higher noise amplitude.

Relevance: 100.00%

Abstract:

In this paper, using idealized climate model simulations, we investigate the biogeophysical effects of large-scale deforestation on monsoon regions. We find that the remote forcing from large-scale deforestation in the northern middle and high latitudes shifts the Intertropical Convergence Zone southward. This results in a significant decrease in precipitation in the Northern Hemisphere monsoon regions (East Asia, North America, North Africa, and South Asia) and moderate precipitation increases in the Southern Hemisphere monsoon regions (South Africa, South America, and Australia). The magnitude of the monsoonal precipitation changes depends on the location of deforestation, with remote effects showing a larger influence than local effects. The South Asian Monsoon region is affected the most, with an 18% decline in precipitation over India. Our results indicate that any comprehensive assessment of afforestation/reforestation as a climate change mitigation strategy should carefully evaluate the remote effects on monsoonal precipitation alongside the large local impacts on temperature.

Relevance: 100.00%

Abstract:

Spatial modulation (SM) is attractive for multiantenna wireless communications. SM uses multiple transmit antenna elements but only one transmit radio frequency (RF) chain. In SM, in addition to the information bits conveyed through conventional modulation symbols (e.g., QAM), the index of the active transmit antenna also conveys information bits. In this paper, we establish that SM has a significant signal-to-noise ratio (SNR) advantage over conventional modulation in large-scale multiuser multiple-input multiple-output (MIMO) systems. Our new contribution in this paper addresses the key issue of large-dimension signal processing at the base station (BS) receiver (e.g., signal detection) in large-scale multiuser SM-MIMO systems, where each user is equipped with multiple transmit antennas (e.g., 2 or 4 antennas) but only one transmit RF chain, and the BS is equipped with tens to hundreds of receive antennas (e.g., 128). Specifically, we propose two novel algorithms for the detection of large-scale SM-MIMO signals at the BS; one is based on message passing and the other on local search. The proposed algorithms achieve very good performance and scale well. For the same spectral efficiency, multiuser SM-MIMO outperforms conventional multiuser MIMO (recently referred to as massive MIMO) by several dB. The SNR advantage of SM-MIMO over massive MIMO can be attributed to two factors: (i) because of the spatial index bits, SM-MIMO can use a lower-order QAM alphabet than massive MIMO to achieve the same spectral efficiency, and (ii) for the same spectral efficiency and QAM size, massive MIMO needs more spatial streams per user, which leads to increased spatial interference.
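
The bit mapping that gives SM its spatial index bits can be sketched as follows (the function name and the plain binary index mapping are illustrative choices, not the paper's): with n_t transmit antennas and M-ary QAM, each channel use carries log2(n_t) antenna-index bits plus log2(M) modulation bits.

```python
# Illustrative SM bit-to-symbol mapping: the first log2(n_t) bits select the
# active antenna, the remaining log2(M) bits select the QAM symbol sent on it.
import numpy as np

def sm_map(bits, n_t, qam_symbols):
    k_ant = int(np.log2(n_t))
    k_mod = int(np.log2(len(qam_symbols)))
    assert len(bits) == k_ant + k_mod
    antenna = int("".join(map(str, bits[:k_ant])), 2)              # antenna-index bits
    symbol = qam_symbols[int("".join(map(str, bits[k_ant:])), 2)]  # modulation bits
    x = np.zeros(n_t, dtype=complex)
    x[antenna] = symbol                                            # only one antenna active
    return x

if __name__ == "__main__":
    qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
    print(sm_map([1, 0, 1, 1], n_t=4, qam_symbols=qpsk))           # 4 bits per channel use
```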

Relevance: 100.00%

Abstract:

Exascale systems of the future are predicted to have a mean time between failures (MTBF) of less than one hour. At such low MTBFs, employing periodic checkpointing alone will result in low efficiency because of the high number of application failures, which cause a large amount of lost work due to rollbacks. In such scenarios, it is highly necessary to have proactive fault tolerance mechanisms that can help avoid a significant number of failures. In this work, we have developed a mechanism for proactive fault tolerance using partial replication of a set of application processes. Our fault tolerance framework adaptively changes the set of replicated processes periodically, based on failure predictions, to avoid failures. We have developed an MPI prototype implementation, PAREP-MPI, that allows changing the replica set. We have shown that our strategy involving adaptive process replication significantly outperforms existing mechanisms, providing up to 20 percent improvement in application efficiency even for exascale systems.
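
The periodic adaptation step can be pictured with a small sketch (the interface below is hypothetical and not the PAREP-MPI API): at each interval, the ranks predicted most likely to fail are placed in the replica set, subject to a budget on the number of replicas.

```python
# Hypothetical sketch of periodic replica-set adaptation driven by failure predictions.
def choose_replica_set(failure_probs, budget):
    """failure_probs: dict mapping process rank -> predicted failure probability.
    Returns the `budget` ranks most at risk, to be replicated until the next interval."""
    ranked = sorted(failure_probs, key=failure_probs.get, reverse=True)
    return set(ranked[:budget])

if __name__ == "__main__":
    probs = {0: 0.02, 1: 0.40, 2: 0.05, 3: 0.75, 4: 0.10}
    print(choose_replica_set(probs, budget=2))   # ranks 3 and 1 get replicas this interval
```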

Relevance: 100.00%

Abstract:

Generalized spatial modulation (GSM) uses n_t transmit antenna elements but fewer transmit radio frequency (RF) chains, n_rf. Spatial modulation (SM) and spatial multiplexing are special cases of GSM with n_rf = 1 and n_rf = n_t, respectively. In GSM, in addition to conveying information bits through n_rf conventional modulation symbols (for example, QAM), the indices of the n_rf active transmit antennas also convey information bits. In this paper, we investigate GSM for large-scale multiuser MIMO communications on the uplink. Our contributions in this paper include: 1) an average bit error probability (ABEP) analysis for maximum-likelihood detection in multiuser GSM-MIMO on the uplink, where we derive an upper bound on the ABEP, and 2) low-complexity algorithms for GSM-MIMO signal detection and channel estimation at the base station receiver based on message passing. The analytical upper bounds on the ABEP are found to be tight at moderate to high signal-to-noise ratios (SNR). The proposed receiver algorithms are found to scale very well in complexity while achieving near-optimal performance in large dimensions. Simulation results show that, for the same spectral efficiency, multiuser GSM-MIMO can outperform multiuser SM-MIMO as well as conventional multiuser MIMO by about 2 to 9 dB at a bit error rate of 10^-3. Such SNR gains in GSM-MIMO compared to SM-MIMO and conventional MIMO can be attributed to the fact that, because of a larger number of spatial index bits, GSM-MIMO can use a lower-order QAM alphabet, which is more power efficient.
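
The spectral-efficiency accounting behind these comparisons follows the standard GSM rate expression (written here in our notation):

```latex
\eta_{\mathrm{GSM}}
  \;=\; \Big\lfloor \log_2 \binom{n_t}{n_{rf}} \Big\rfloor \;+\; n_{rf}\,\log_2 M
  \quad\text{bits per channel use.}
```

For example, n_t = 4, n_rf = 2 and 4-QAM give floor(log2 6) + 2 x 2 = 6 bits per channel use; SM is the special case n_rf = 1, and for n_rf = n_t the index term vanishes, recovering spatial multiplexing.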

Relevance: 100.00%

Abstract:

We show that the removal of angular momentum is possible in the presence of large-scale magnetic stresses in geometrically thick, advective, sub-Keplerian accretion flows around black holes in steady state, in the complete absence of alpha-viscosity. The efficiency of such angular momentum transfer can be equivalent to that of an alpha-viscosity with alpha = 0.01-0.08. Nevertheless, the required field is well below its equipartition value, leading to a magnetically stable disk flow. This is especially important for describing the hard spectral state of these sources, when the flow is non/sub-Keplerian. We show, in our simpler 1.5-dimensional, vertically averaged disk model, that the larger the vertical gradient of the azimuthal component of the magnetic field, the stronger the rate of angular momentum transfer, which in turn may lead to a faster rate of outflowing matter. Finding efficient angular momentum transfer in black hole disks via magnetic stresses alone is very interesting when the generic origin of alpha-viscosity is still being explored.

Relevance: 100.00%

Abstract:

Despite significant advances in recent years, structure-from-motion (SfM) pipelines suffer from two important drawbacks. Apart from requiring significant computational power to solve the large-scale computations involved, such pipelines sometimes fail to reconstruct correctly when the accumulated error in incremental reconstruction is large or when the number of 3D-to-2D correspondences is insufficient. In this paper we present a novel approach to mitigate the above-mentioned drawbacks. Using an image match graph based on feature matching, we partition the image data set into smaller sets, or components, which are reconstructed independently. Following such reconstructions, we utilise the available epipolar relationships that connect images across components to correctly align the individual reconstructions in a global frame of reference. This both yields a significant speed-up of at least one order of magnitude and mitigates the problem of reconstruction failures, with a marginal loss in accuracy. The effectiveness of our approach is demonstrated on some large-scale real-world data sets.
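
At a high level, the partition-and-merge pipeline can be sketched as follows; every helper below is a placeholder standing in for the heavy lifting (graph partitioning, incremental SfM on a component, and robust similarity estimation from the cross-component epipolar constraints), not the authors' code.

```python
# Skeleton of the partition-and-merge idea: reconstruct each component of the
# image match graph independently, then register every partial model into the
# frame of a reference component using an estimated similarity transform.
def partitioned_sfm(match_graph, partition, reconstruct, estimate_similarity):
    components = partition(match_graph)                 # split images into smaller sets
    models = [reconstruct(c) for c in components]       # independent reconstructions
    aligned = [models[0]]                               # first component fixes the global frame
    for comp, model in zip(components[1:], models[1:]):
        # Epipolar relationships between images in `comp` and already-aligned images
        # provide the constraints from which a 7-DoF similarity (scale, rotation,
        # translation) can be estimated robustly.
        sim = estimate_similarity(model, aligned, match_graph)
        aligned.append(sim(model))
    return aligned
```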