992 results for fast preparation
Abstract:
In this work, we observe gate-tunable negative differential conductance (NDC) and current saturation in single-layer and bilayer graphene transistors at high source-drain fields, which arise from the interplay among (1) self-heating, (2) hot carrier injection, and (3) drain-induced minority carrier injection. The magnitude of the NDC is found to be reduced for a bilayer, in agreement with its weaker carrier-optical phonon coupling and less efficient hot carrier injection. The contributions of the different mechanisms to the observed results are decoupled through fast transient measurements with nanosecond resolution. The findings provide insights into high-field transport in graphene. (C) 2012 American Institute of Physics. http://dx.doi.org/10.1063/1.4754103
Abstract:
Real-time image reconstruction is essential for improving the temporal resolution of fluorescence microscopy. A number of unavoidable processes, such as optical aberration, noise, and scattering, degrade image quality, thereby making image reconstruction an ill-posed problem. Maximum likelihood is an attractive technique for data reconstruction, especially when the problem is ill-posed. The iterative nature of the maximum-likelihood technique, however, precludes real-time imaging. Here we propose and demonstrate a compute unified device architecture (CUDA) based fast computing engine for real-time 3D fluorescence imaging. A maximum performance boost of 210x is reported. The ready availability of powerful computing engines is a boon and may accelerate the realization of real-time 3D fluorescence imaging. Copyright 2012 Author(s). This article is distributed under a Creative Commons Attribution 3.0 Unported License. http://dx.doi.org/10.1063/1.4754604
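The abstract gives no implementation details, so the following is only a minimal CPU-side sketch of the kind of iterative maximum-likelihood update (here the classical Richardson-Lucy scheme for Poisson noise) that such a GPU engine would accelerate; the function and variable names are illustrative assumptions, and this is not the authors' CUDA implementation.

```python
import numpy as np
from numpy.fft import fftn, ifftn

def richardson_lucy(observed, psf, n_iter=50, eps=1e-12):
    """Classical Richardson-Lucy maximum-likelihood deconvolution (CPU sketch)."""
    psf_ft = fftn(psf, s=observed.shape)          # PSF spectrum, zero-padded to image size
    psf_ft_conj = np.conj(psf_ft)
    estimate = np.full(observed.shape, observed.mean(), dtype=np.float64)
    for _ in range(n_iter):
        blurred = np.real(ifftn(fftn(estimate) * psf_ft))        # forward model: blur the estimate
        ratio = observed / np.maximum(blurred, eps)              # Poisson ML data term
        correction = np.real(ifftn(fftn(ratio) * psf_ft_conj))   # correlate with the flipped PSF
        estimate *= correction                                   # multiplicative ML update
    return estimate

# Tiny synthetic example: blur a random 3D volume with a trivial PSF and restore it.
rng = np.random.default_rng(0)
truth = rng.random((32, 32, 16))
psf = np.zeros((5, 5, 3)); psf[2, 2, 1] = 1.0
blurred = np.real(ifftn(fftn(truth) * fftn(psf, s=truth.shape)))
restored = richardson_lucy(blurred, psf, n_iter=10)
```

Each iteration is dominated by FFTs and element-wise operations, which is precisely the workload that maps well onto a CUDA-capable GPU.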
Abstract:
Protein structure comparison is essential for understanding various aspects of protein structure, function, and evolution. It can be used to explore the structural diversity and evolutionary patterns of protein families. In view of the above, a new algorithm is proposed that performs faster protein structure comparison using the peptide backbone torsional angles. It is fast, robust, computationally inexpensive, and efficient in finding structural similarities between two different protein structures, and it is also capable of identifying structural repeats within the same protein molecule.
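The abstract does not spell out the algorithm, so the sketch below only illustrates the underlying idea of torsion-angle-based comparison: compute backbone dihedral angles from atomic coordinates and score two structures by the circular difference of their angle sequences. The function names and the simple mean-difference score are illustrative assumptions, not the authors' method.

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Torsion angle (in degrees) defined by four consecutive backbone atoms."""
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))
    b0, b1, b2 = p0 - p1, p2 - p1, p3 - p2
    b1 /= np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1      # component of b0 perpendicular to the central bond
    w = b2 - np.dot(b2, b1) * b1      # component of b2 perpendicular to the central bond
    return np.degrees(np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w)))

def torsion_distance(angles_a, angles_b):
    """Mean circular difference (degrees) between two equal-length torsion-angle lists."""
    d = np.abs(np.asarray(angles_a) - np.asarray(angles_b)) % 360.0
    return np.minimum(d, 360.0 - d).mean()

# Toy usage with two short (phi, psi) sequences.
phi_psi_a = [-60.0, -45.0, -120.0, 140.0]
phi_psi_b = [-58.0, -50.0, -118.0, 150.0]
print(torsion_distance(phi_psi_a, phi_psi_b))
```

Working in torsion-angle space avoids the rigid-body superposition step of coordinate-based comparison, which is one reason such approaches can be fast.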
Abstract:
In a communication system in which K nodes communicate with a central sink node, the following selection problem often occurs. Each node maintains a preference number called a metric, which is not known to the other nodes. The sink node must find the 'best' node, i.e., the one with the largest metric. The local nature of the metrics requires the selection process to be distributed. Further, the selection needs to be fast in order to increase the fraction of time available for data transmission using the selected node and to handle time-varying environments. While several selection schemes have been proposed in the literature, each has its own shortcomings. We propose a novel, distributed selection scheme that generalizes the best features of the timer scheme, which requires minimal feedback but does not guarantee successful selection, and the splitting scheme, which requires more feedback but guarantees successful selection. The proposed scheme introduces several new ideas into the design of the timer and splitting schemes. It explicitly accounts for feedback overheads and guarantees selection of the best node. We analyze and optimize the performance of the scheme and show that it is scalable, reliable, and fast. We also present new insights about the optimal timer scheme.
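As background for the timer scheme mentioned above, the toy simulation below maps each metric to a timer through a decreasing function and declares failure when the two earliest timers expire too close together. The metric-to-timer mapping, the vulnerability window `delta`, and the Uniform(0,1) metric model are illustrative assumptions, not the scheme proposed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def timer_selection(metrics, t_max=1.0, delta=0.02):
    """Toy timer scheme: each node sets timer = t_max * (1 - F(metric)); here the
    metrics are assumed Uniform(0,1), so F(m) = m.  The earliest timer wins unless
    the runner-up expires within `delta` of it (collision, no node resolved)."""
    timers = t_max * (1.0 - np.asarray(metrics))   # best metric -> earliest timer
    order = np.argsort(timers)
    if len(timers) > 1 and timers[order[1]] - timers[order[0]] < delta:
        return None                                # collision: selection fails
    return order[0]                                # index of the selected (best) node

# Monte-Carlo estimate of the timer scheme's failure probability for K = 20 nodes.
K, trials = 20, 10_000
fails = sum(timer_selection(rng.random(K)) is None for _ in range(trials))
print(f"selection failure rate ~ {fails / trials:.3f}")
```

The non-zero failure rate printed at the end is exactly the weakness of the pure timer scheme that a hybrid timer/splitting design aims to remove.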
Abstract:
In the submitted research, nanocrystalline Ni0.5Cu0.25Zn0.25Fe2-xInxO4 powders with varied amounts of indium (x = 0.0, 0.1, 0.2, 0.3 and 0.4) were synthesized by a modified citrate-nitrate method. The formation of a single-phase cubic spinel structure in the synthesized ferrite samples was studied by DTA-TGA, XRD, SEM, EDX, FT-IR, VSM and dielectric measurements. SEM was used to inspect the morphological variations and EDX was used to determine the compositional mass ratios. The dielectric constant (epsilon'), dielectric loss (epsilon''), loss tangent (tan delta), ac conductivity (sigma(ac)), and the resistive and reactive parts of the impedance (Z' and Z'') were also studied at room temperature. The saturation magnetization (Ms) was determined using the vibrating sample magnetometer (VSM). Ms decreased with increasing In3+ doping content, as magnetic Fe3+ (5 mu(B)) ions are replaced by nonmagnetic In3+ ions. (C) 2012 Elsevier B. V. All rights reserved.
Abstract:
Edge-preserving smoothing is widely used in image processing, and bilateral filtering is one way to achieve it. The bilateral filter is a nonlinear combination of domain and range filters. Implementing the classical bilateral filter is computationally intensive, owing to the nonlinearity of the range filter. In the standard form, the domain and range filters are Gaussian functions, and the performance depends on the choice of the filter parameters. Recently, a constant-time implementation of the bilateral filter has been proposed based on a raised-cosine approximation to the Gaussian to facilitate fast implementation of the bilateral filter. We address the problem of determining the optimal parameters for the raised-cosine-based constant-time implementation of the bilateral filter. To determine the optimal parameters, we propose the use of Stein's unbiased risk estimator (SURE). The fast bilateral filter accelerates the search for the optimal parameters by speeding up the optimization of the SURE cost. Experimental results show that the SURE-optimal raised-cosine-based bilateral filter has nearly the same performance as the SURE-optimal standard Gaussian bilateral filter and the oracle mean squared error (MSE)-based optimal bilateral filter.
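For readers unfamiliar with the filter being approximated, the following is a minimal brute-force reference implementation of the classical Gaussian bilateral filter. It is deliberately not the constant-time raised-cosine approximation or the SURE-based parameter selection described above, and the default parameter values are placeholders.

```python
import numpy as np

def bilateral_filter(img, sigma_s=3.0, sigma_r=0.1, radius=None):
    """Brute-force Gaussian bilateral filter for a grayscale float image in [0, 1]."""
    if radius is None:
        radius = int(3 * sigma_s)
    H, W = img.shape
    pad = np.pad(img, radius, mode='reflect')
    out = np.zeros_like(img)
    # Precompute the spatial (domain) Gaussian weights over the window.
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(x**2 + y**2) / (2 * sigma_s**2))
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Range (intensity) Gaussian weights relative to the center pixel.
            range_w = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))
            w = spatial * range_w
            out[i, j] = np.sum(w * patch) / np.sum(w)
    return out

# Usage on a small synthetic ramp image corrupted by noise.
rng = np.random.default_rng(0)
noisy = np.clip(np.tile(np.linspace(0, 1, 64), (64, 1)) + 0.05 * rng.standard_normal((64, 64)), 0, 1)
smoothed = bilateral_filter(noisy, sigma_s=2.0, sigma_r=0.15)
```

The per-pixel cost grows with the window size, which is exactly the dependence that the constant-time raised-cosine approximation removes.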
Abstract:
Acoustic modeling using mixtures of multivariate Gaussians is the prevalent approach for many speech processing problems. Computing likelihoods against a large set of Gaussians is required as a part of many speech processing systems, and it is the computationally dominant phase for Large Vocabulary Continuous Speech Recognition (LVCSR) systems. We express the likelihood computation as a multiplication of matrices representing augmented feature vectors and Gaussian parameters. The computational gain of this approach over traditional methods comes from exploiting the structure of these matrices and from efficient implementation of their multiplication. In particular, we explore direct low-rank approximation of the Gaussian parameter matrix and indirect derivation of low-rank factors of the Gaussian parameter matrix by optimal approximation of the likelihood matrix. We show that both methods lead to similar speedups, but the latter has far less impact on the recognition accuracy. Experiments on the 1,138-word vocabulary RM1 task and the 6,224-word vocabulary TIMIT task using the Sphinx 3.7 system show that, for a typical case, the matrix multiplication based approach leads to an overall speedup of 46% on the RM1 task and 115% on the TIMIT task. Our low-rank approximation methods provide a way of trading off recognition accuracy for a further increase in computational performance, extending the overall speedups up to 61% for RM1 and 119% for TIMIT, for an increase of word error rate (WER) from 3.2% to 3.5% on RM1 and no increase in WER on TIMIT. We also express the pairwise Euclidean distance computation phase of Dynamic Time Warping (DTW) in terms of matrix multiplication, leading to a saving in computational operations. In our experiments, using an efficient implementation of matrix multiplication, this yields a speedup of 5.6 in computing the pairwise Euclidean distances and an overall speedup of up to 3.25 for DTW.
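The matrix formulation is not written out in the abstract, but for diagonal-covariance Gaussians the standard identity is that the log-likelihood is an affine function of [1, x, x^2], so all frame-versus-Gaussian scores reduce to one matrix product. The NumPy sketch below illustrates that identity; the dimensions and function names are illustrative, and the low-rank variants studied in the paper would correspond to replacing the parameter matrix (or the resulting likelihood matrix) with a truncated factorization.

```python
import numpy as np

def gaussian_param_matrix(means, variances):
    """Pack K diagonal Gaussians into a (K, 2D+1) matrix so that all
    log-likelihoods become a single matrix multiplication."""
    K, D = means.shape
    const = -0.5 * (D * np.log(2 * np.pi) + np.log(variances).sum(axis=1)
                    + (means**2 / variances).sum(axis=1))
    lin = means / variances            # coefficients of x_d
    quad = -0.5 / variances            # coefficients of x_d^2
    return np.hstack([const[:, None], lin, quad])

def augment(X):
    """Map T frames of D-dim features to the (T, 2D+1) augmented form [1, x, x^2]."""
    return np.hstack([np.ones((X.shape[0], 1)), X, X**2])

# Log-likelihoods of every frame against every Gaussian in one matrix product.
T, D, K = 1000, 39, 512
rng = np.random.default_rng(0)
X = rng.standard_normal((T, D))
means = rng.standard_normal((K, D))
variances = rng.uniform(0.5, 2.0, (K, D))
loglik = augment(X) @ gaussian_param_matrix(means, variances).T   # shape (T, K)

# Spot-check against the direct formula for one (frame, Gaussian) pair.
d0 = X[0] - means[0]
direct = -0.5 * (D * np.log(2 * np.pi) + np.log(variances[0]).sum() + (d0**2 / variances[0]).sum())
assert np.isclose(loglik[0, 0], direct)
```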
Abstract:
The compatibility of the fast-tachocline scenario with a flux-transport dynamo model is explored. We employ a flux-transport dynamo model coupled with simple feedback formulae relating the thickness of the tachocline to the amplitude of the magnetic field or to the Maxwell stress. The dynamo model is found to be robust against the nonlinearity introduced by this simplified fast-tachocline mechanism. Solar-like butterfly diagrams are found to persist and, even without any parameter fitting, the overall thickness of the tachocline is well within the range admitted by helioseismic constraints. In the most realistic case of a time- and latitude-dependent tachocline thickness linked to the value of the Maxwell stress, both the thickness and its latitudinal dependence are in excellent agreement with seismic results. In nonparametric models, cycle-related temporal variations in tachocline thickness are somewhat larger than admitted by helioseismic constraints; we find, however, that introducing a further parameter into our feedback formula readily allows further fine-tuning of the thickness variations.
Abstract:
Opportunistic selection is a practically appealing technique that is used in multi-node wireless systems to maximize throughput, implement proportional fairness, etc. However, selection is challenging since the information about a node's channel gains is often available only locally at each node and not centrally. We propose a novel multiple-access-based distributed selection scheme that generalizes the best features of the timer scheme, which requires minimal feedback but does not always guarantee successful selection, and the fast splitting scheme, which requires more feedback but guarantees successful selection. The proposed scheme's design explicitly accounts for feedback time overheads, unlike the conventional splitting scheme, and guarantees selection of the user with the highest metric, unlike the timer scheme. We analyze and minimize the average time, including feedback, required by the scheme to select. With feedback overheads, the proposed scheme is scalable and considerably faster than several schemes proposed in the literature. Furthermore, its gains increase as the feedback overhead increases.
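To complement the timer-scheme sketch given earlier, the toy simulation below emulates a conventional splitting scheme for i.i.d. Uniform(0,1) metrics: nodes whose metric falls in the current interval transmit, and ternary idle/success/collision feedback drives the interval update until a single node is resolved. The particular interval-update rule and slot budget are illustrative assumptions; this is the baseline splitting idea, not the hybrid scheme proposed in the abstract.

```python
import numpy as np

def splitting_selection(metrics, max_slots=50):
    """Toy splitting scheme for i.i.d. Uniform(0,1) metrics.  In each slot, nodes
    whose metric lies in (lo, hi] transmit; ternary feedback (idle/success/collision)
    updates the interval until exactly one node transmits alone."""
    metrics = np.asarray(metrics)
    lo, hi, floor = 1.0 - 1.0 / len(metrics), 1.0, 0.0     # initial guess interval
    for slot in range(1, max_slots + 1):
        active = np.flatnonzero((metrics > lo) & (metrics <= hi))
        if len(active) == 1:                 # success: the best node is resolved
            return int(active[0]), slot
        if len(active) == 0:                 # idle: shift the interval downwards
            lo, hi = max(floor, lo - (hi - lo)), lo
        else:                                # collision: raise the lower threshold
            floor, lo = lo, (lo + hi) / 2.0
    return None, max_slots

best, slots = splitting_selection(np.random.default_rng(1).random(30))
print(best, slots)
```

Unlike the timer scheme, this procedure resolves the node with the largest metric (given enough slots), but every extra slot costs a feedback message, which is the overhead that the proposed scheme explicitly accounts for.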
Abstract:
Acoustic modeling using mixtures of multivariate Gaussians is the prevalent approach for many speech processing problems. Computing likelihoods against a large set of Gaussians is required as a part of many speech processing systems, and it is the computationally dominant phase for LVCSR systems. We express the likelihood computation as a multiplication of matrices representing augmented feature vectors and Gaussian parameters. The computational gain of this approach over traditional methods comes from exploiting the structure of these matrices and from efficient implementation of their multiplication. In particular, we explore direct low-rank approximation of the Gaussian parameter matrix and indirect derivation of low-rank factors of the Gaussian parameter matrix by optimal approximation of the likelihood matrix. We show that both methods lead to similar speedups, but the latter has far less impact on the recognition accuracy. Experiments on a 1,138-word vocabulary RM1 task using the Sphinx 3.7 system show that, for a typical case, the matrix multiplication approach leads to an overall speedup of 46%. Both low-rank approximation methods increase the speedup to around 60%, with the former method increasing the word error rate (WER) from 3.2% to 6.6%, while the latter increases the WER from 3.2% to 3.5%.
Abstract:
Decoding of linear space-time block codes (STBCs) with sphere decoding (SD) is well known. A fast version of the SD, known as fast sphere decoding (FSD), has recently been studied by Biglieri, Hong and Viterbo. Viewing a linear STBC as a vector space spanned by its defining weight matrices over the real number field, we define a quadratic form (QF), called the Hurwitz-Radon QF (HRQF), on this vector space and give a QF interpretation of the FSD complexity of a linear STBC. It is shown that the FSD complexity is only a function of the weight matrices defining the code and their ordering, and not of the channel realization (even though the equivalent channel when SD is used depends on the channel realization) or the number of receive antennas. It is also shown that the FSD complexity is completely captured in a single matrix obtained from the HRQF. Moreover, for a given set of weight matrices, an algorithm to obtain a best ordering of them leading to the least FSD complexity is presented. The well-known classes of low-FSD-complexity codes (multi-group decodable codes, fast-decodable codes, and fast group decodable codes) are presented in the framework of the HRQF.
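The abstract does not reproduce the definition of the HRQF, so the snippet below only illustrates the kind of pairwise structure it captures, under the assumption that the relevant quantity for a pair of weight matrices A_i, A_j is A_i A_j^H + A_j A_i^H, whose vanishing is the classical Hurwitz-Radon orthogonality condition. For the Alamouti code the printed matrix has all off-diagonal entries equal to zero, reflecting that code's symbol-by-symbol decodability; the exact HRQF matrix used in the paper may be defined or normalized differently.

```python
import numpy as np

# Weight matrices of the Alamouti code X = [[x1, x2], [-x2*, x1*]],
# with x1 = s1 + j*s2 and x2 = s3 + j*s4 (one matrix per real symbol s_k).
A = [np.array([[1, 0], [0, 1]], dtype=complex),
     np.array([[1j, 0], [0, -1j]], dtype=complex),
     np.array([[0, 1], [-1, 0]], dtype=complex),
     np.array([[0, 1j], [1j, 0]], dtype=complex)]

def hurwitz_radon_matrix(weights):
    """Pairwise Hurwitz-Radon products: entry (i, j) is the squared Frobenius
    norm of A_i A_j^H + A_j A_i^H (assumed form of the HRQF-derived matrix);
    a zero entry indicates a pair of real symbols that decouple in decoding."""
    K = len(weights)
    M = np.zeros((K, K))
    for i in range(K):
        for j in range(K):
            P = weights[i] @ weights[j].conj().T + weights[j] @ weights[i].conj().T
            M[i, j] = np.linalg.norm(P, 'fro')**2
    return M

print(hurwitz_radon_matrix(A))   # off-diagonal zeros: Alamouti is single-real-symbol decodable
```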
Abstract:
Channel-aware assignment of sub-channels to users in the downlink of an OFDMA system demands extensive feedback of channel state information (CSI) to the base station. Since the feedback bandwidth is often very scarce, schemes that limit feedback are necessary. We develop a novel, low feedback splitting-based algorithm for assigning each sub-channel to its best user, i.e., the user with the highest gain for that sub-channel among all users. The key idea behind the algorithm is that, at any time, each user contends for the sub-channel on which it has the largest channel gain among the unallocated sub-channels. Unlike other existing schemes, the algorithm explicitly handles multiple access control aspects associated with the feedback of CSI. A tractable asymptotic analysis of a system with a large number of users helps design the algorithm. It yields 50% to 65% throughput gains compared to an asymptotically optimal one-bit feedback scheme, when the number of users is as small as 10 or as large as 1000. The algorithm is fast and distributed, and scales with the number of users.
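The sketch below is a toy, centralized emulation of the contention rule described above: in each round every user bids for its best still-unallocated sub-channel, and each contended sub-channel goes to the strongest bidder. It omits the distributed splitting and CSI-feedback machinery that is the paper's actual contribution, and the fading model and system sizes are illustrative.

```python
import numpy as np

def greedy_contention_assignment(gains):
    """Toy emulation of the contention rule: in every round each user bids for its
    best sub-channel among those still unallocated, and each contended sub-channel
    is assigned to the bidder with the largest gain on it.
    gains[u, c] is the channel gain of user u on sub-channel c."""
    U, C = gains.shape
    owner = np.full(C, -1)
    unallocated = set(range(C))
    while unallocated:
        free = np.array(sorted(unallocated))
        # Each user's bid: its best sub-channel among the unallocated ones.
        bids = free[np.argmax(gains[:, free], axis=1)]
        for c in set(bids):
            contenders = np.flatnonzero(bids == c)
            owner[c] = contenders[np.argmax(gains[contenders, c])]
            unallocated.remove(c)
    return owner

rng = np.random.default_rng(0)
g = rng.exponential(size=(10, 32))        # 10 users, 32 Rayleigh-faded sub-channels
print(greedy_contention_assignment(g))    # winning user index for each sub-channel
```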
Abstract:
The present investigation reports the preparation of freestanding nanocrystalline Zn by combined mechanical milling at cryogenic and room temperatures. Cryomilling is used as an effective means of rapid fracturing. Detailed scanning electron microscopy and transmission electron microscopy observations indicate that the minimum crystallite size is 6 +/- 2 nm after 3 hours of cryomilling. The crystallite size increases to 30 +/- 2 nm after 3 hours of room-temperature milling of the cryomilled powder due to deformation-induced sintering. A detailed theoretical analysis allows us to obtain a diagram of the size of the nanoparticles formed versus temperature to explain the experimental findings.
Abstract:
In this paper we present a hardware-software hybrid technique for modular multiplication over large binary fields. The technique involves the application of the Karatsuba-Ofman algorithm for polynomial multiplication and a novel technique for reduction. The proposed reduction technique is based on the popular repeated multiplication technique and Barrett reduction. We propose a new design of a parallel polynomial multiplier that serves as a hardware accelerator for large field multiplications. We show that the proposed reduction technique, accelerated using the modified polynomial multiplier, achieves significantly higher performance than a purely software technique and other hybrid techniques. We also show that the hybrid accelerated approach to modular field multiplication is significantly faster than the Montgomery-algorithm-based integrated multiplication approach.
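As a point of reference for the arithmetic involved, the pure-software sketch below multiplies binary polynomials (encoded as Python integers, with bit i holding the coefficient of x^i) using the Karatsuba-Ofman recursion, and then reduces the product by plain shift-and-XOR division. The recursion threshold and the GF(2^233) example are illustrative; in particular, the reduction shown is the naive one, not the Barrett-style reduction, and there is no hardware acceleration as proposed in the paper.

```python
import secrets

def gf2_mul_karatsuba(a, b, threshold=64):
    """Karatsuba-Ofman multiplication of binary polynomials (carry-less product)."""
    n = max(a.bit_length(), b.bit_length())
    if n <= threshold:
        result = 0                       # schoolbook carry-less multiply for small inputs
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            b >>= 1
        return result
    half = n // 2
    mask = (1 << half) - 1
    a_lo, a_hi = a & mask, a >> half
    b_lo, b_hi = b & mask, b >> half
    lo = gf2_mul_karatsuba(a_lo, b_lo, threshold)
    hi = gf2_mul_karatsuba(a_hi, b_hi, threshold)
    mid = gf2_mul_karatsuba(a_lo ^ a_hi, b_lo ^ b_hi, threshold) ^ lo ^ hi
    return (hi << (2 * half)) ^ (mid << half) ^ lo

def gf2_reduce(x, modulus):
    """Reduce a binary polynomial modulo `modulus` by repeated aligned XORs
    (plain shift-and-subtract division, not Barrett reduction)."""
    deg_m = modulus.bit_length() - 1
    while x.bit_length() - 1 >= deg_m:
        x ^= modulus << (x.bit_length() - 1 - deg_m)
    return x

# Example: one field multiplication in GF(2^233), reduction polynomial x^233 + x^74 + 1.
m = (1 << 233) | (1 << 74) | 1
a, b = secrets.randbits(233), secrets.randbits(233)
product = gf2_reduce(gf2_mul_karatsuba(a, b), m)
assert product.bit_length() <= 233
```

The Karatsuba step replaces one full-width multiplication with three of roughly half the size, which is the part of the workload that the paper offloads to a parallel hardware multiplier.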
Abstract:
Interconnected Os nanochains consisting of ultrafine particles, prepared using a simple procedure, yield a coupled surface plasmon peak in the visible region and can be used as substrates for surface-enhanced Raman scattering of various analytes.