990 results for "Efficient elliptic curve arithmetic"
Abstract:
The introduction of processor-based instruments in power systems is producing rapid growth in the volume of measured data. Present practice in most utilities is to store only some of the important data in retrievable form, and only for a limited period; subsequently, even this data is either deleted or moved to backup devices. The investigations presented here explore the application of lossless data compression techniques for archiving all the operational data, so that it can be put to more effective use. Four arithmetic coding methods, suitably modified for handling power system steady-state operational data, are proposed. The performance of the proposed methods is evaluated using actual data from the Southern Regional Grid of India.
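As background for the approach, here is a minimal, textbook-style float-based arithmetic coder. The static symbol distribution is hypothetical, and the paper's four power-system-specific variants are not reproduced; this is only a sketch of the underlying coding idea.

```python
# Minimal float-based arithmetic coder (textbook form). The symbol
# probabilities below are hypothetical; real use needs a model fitted to
# the data and an integer/renormalizing implementation for long inputs.

def _cumulative(probs):
    """Map each symbol to its slice [cum_low, cum_high) of the unit interval."""
    cum, c = {}, 0.0
    for s, p in probs.items():
        cum[s] = (c, c + p)
        c += p
    return cum

def arithmetic_encode(symbols, probs):
    cum = _cumulative(probs)
    low, high = 0.0, 1.0
    for s in symbols:                      # narrow [low, high) once per symbol
        span = high - low
        lo_s, hi_s = cum[s]
        low, high = low + span * lo_s, low + span * hi_s
    return (low + high) / 2                # any number in [low, high) codes the message

def arithmetic_decode(code, n, probs):
    cum = _cumulative(probs)
    out, low, high = [], 0.0, 1.0
    for _ in range(n):
        span = high - low
        for s, (lo_s, hi_s) in cum.items():
            if low + span * lo_s <= code < low + span * hi_s:
                out.append(s)
                low, high = low + span * lo_s, low + span * hi_s
                break
    return out

probs = {"A": 0.6, "B": 0.3, "C": 0.1}     # hypothetical symbol frequencies
msg = list("ABACAB")
assert arithmetic_decode(arithmetic_encode(msg, probs), len(msg), probs) == msg
```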
Abstract:
A newly implemented G-matrix Fourier transform (GFT) (4,3)D HC(C)CH experiment is presented in conjunction with (4,3)D HCCH to efficiently identify ¹H/¹³C sugar spin systems in ¹³C-labeled nucleic acids. This experiment enables rapid collection of highly resolved relay 4D HC(C)CH spectral information, that is, shift correlations of ¹³C-¹H groups separated by two carbon bonds. For RNA, (4,3)D HC(C)CH takes advantage of the comparatively favorable 1'- and 3'-CH signal dispersion for complete spin system identification, including 5'-CH. The (4,3)D HC(C)CH/HCCH based strategy is exemplified for the 30-nucleotide 3'-untranslated region of the pre-mRNA of the human U1A protein.
Abstract:
We present external memory data structures for efficiently answering range-aggregate queries. The range-aggregate problem is defined as follows: given a set of weighted points in ℝ^d, compute the aggregate of the weights of the points that lie inside a d-dimensional orthogonal query rectangle. The aggregates we consider in this paper include COUNT, SUM, and MAX. First, we develop a structure for answering two-dimensional range-COUNT queries that uses O(N/B) disk blocks and answers a query in O(log_B N) I/Os, where N is the number of input points and B is the disk block size. The structure can be extended to obtain a near-linear-size structure for answering range-SUM queries using O(log_B N) I/Os, and a linear-size structure for answering range-MAX queries in O(log_B² N) I/Os. Our structures can be made dynamic and extended to higher dimensions.
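For intuition, the sketch below shows the internal-memory analogue of a range-COUNT query: a sorted array answered with two binary searches. The external-memory structures in the abstract effectively block such arrays into B-sized nodes to reach O(log_B N) I/Os; the two-dimensional structure itself is not reproduced here.

```python
# 1-D range-COUNT on a sorted array via two binary searches: the
# internal-memory analogue of the abstract's external-memory structure.
import bisect

xs = sorted([3, 9, 1, 14, 7, 5, 11])

def range_count(lo, hi):
    """Number of stored values v with lo <= v <= hi."""
    return bisect.bisect_right(xs, hi) - bisect.bisect_left(xs, lo)

print(range_count(4, 12))  # -> 4 (counts 5, 7, 9, 11)
```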
Abstract:
In this paper we study the problem of designing SVM classifiers when the kernel matrix K is affected by uncertainty. Specifically, K is modeled as a positive affine combination of given positive semidefinite kernels, with the coefficients ranging in a norm-bounded uncertainty set. We treat the problem using the Robust Optimization methodology. This reduces the uncertain SVM problem to a deterministic conic quadratic problem, which can in principle be solved by a polynomial time Interior Point (IP) algorithm. However, for large-scale classification problems, IP methods become intractable and one has to resort to first-order gradient-type methods. The strategy we use here is to reformulate the robust counterpart of the uncertain SVM problem as a saddle point problem and employ a special gradient scheme which works directly on the convex-concave saddle function. The algorithm is a simplified version of a general scheme due to Juditski and Nemirovski (2011). It achieves an O(1/T²) reduction of the initial error after T iterations. A comprehensive empirical study on both synthetic data and real-world protein structure data sets shows that the proposed formulations achieve the desired robustness, and that the saddle point based algorithm outperforms the IP method significantly.
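To make the saddle-point idea concrete, here is a toy gradient descent-ascent loop on a simple convex-concave function. It is emphatically not the Juditski-Nemirovski scheme the paper adapts; it only illustrates what "working directly on the saddle function" means.

```python
# Toy gradient descent-ascent on f(x, y) = x^2/2 + x*y - y^2/2, a
# convex-concave function with its saddle point at (0, 0). Illustration
# only; the paper's algorithm is a specialized first-order saddle scheme.
def saddle_gda(x=1.0, y=1.0, eta=0.1, steps=200):
    for _ in range(steps):
        gx = x + y                         # df/dx: descend in x
        gy = x - y                         # df/dy: ascend in y
        x, y = x - eta * gx, y + eta * gy
    return x, y

print(saddle_gda())  # -> a point very near the saddle (0.0, 0.0)
```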
Abstract:
Points-to analysis is a key compiler analysis. Several memory-related optimizations use points-to information to improve their effectiveness. Points-to analysis is performed by building a constraint graph of pointer variables and dynamically updating it to propagate more and more points-to information across its subset edges. So far, the structure of the constraint graph has been exploited only trivially for efficient propagation of information, e.g., to identify cyclic components or to propagate information in topological order. We perform a careful study of its structure and propose a new inclusion-based, flow-insensitive, context-sensitive points-to analysis algorithm based on the notion of dominant pointers. We also propose a new kind of pointer equivalence based on dominant pointers, which provides significantly more opportunities for reducing the number of pointers tracked during the analysis. Based on this hitherto unexplored form of pointer equivalence, we develop a new context-sensitive, flow-insensitive points-to analysis algorithm which uses incremental dominator updates to efficiently compute points-to information. Using a large suite of programs consisting of SPEC 2000 benchmarks and five large open source programs, we show that our points-to analysis is 88% faster than BDD-based Lazy Cycle Detection and 2x faster than Deep Propagation. We argue that our approach of detecting dominator-based pointer equivalence is key to improving points-to analysis efficiency.
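For readers unfamiliar with the baseline, the sketch below is a minimal inclusion-based (Andersen-style) propagation over subset edges: the starting point that this work and the systems it is compared against refine. The dominant-pointer machinery itself is not reproduced, and load/store constraints are elided.

```python
# Minimal inclusion-based points-to propagation: iterate until points-to
# sets stop growing across subset edges. Load/store constraint processing
# (which adds edges during the run) is elided for brevity.
from collections import deque

def propagate(base, subset_edges):
    """base: {ptr: {objs}} from p = &x constraints;
    subset_edges: {src: {dst, ...}} meaning pts(dst) must contain pts(src)."""
    pts = {p: set(objs) for p, objs in base.items()}
    work = deque(pts)
    while work:
        p = work.popleft()
        for q in subset_edges.get(p, ()):          # propagate along p -> q
            new = pts.get(p, set()) - pts.setdefault(q, set())
            if new:
                pts[q] |= new
                work.append(q)                     # q's successors must see the news
    return pts

base = {"p": {"x"}, "q": {"y"}}    # p = &x; q = &y
edges = {"p": {"r"}, "q": {"r"}}   # r = p; r = q, so pts(r) contains pts(p) and pts(q)
print(propagate(base, edges)["r"])  # -> {'x', 'y'}
```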
Abstract:
Using the spectral multiplicities of the standard torus, we endow the Laplace eigenspaces with Gaussian probability measures. This induces a notion of random Gaussian Laplace eigenfunctions on the torus ("arithmetic random waves"). We study the distribution of the nodal length of random eigenfunctions for large eigenvalues, and our primary result is that the asymptotics of the variance are nonuniversal. Our result is intimately related to the arithmetic of lattice points lying on a circle whose radius corresponds to the energy.
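For concreteness, the usual definition of an arithmetic random wave on the 2-torus, standard in this literature, is sketched below; the notation is ours, not quoted from the abstract.

```latex
% Arithmetic random wave with energy 4\pi^2 n, where n is a sum of two squares:
f_n(x) \;=\; \frac{1}{\sqrt{\mathcal{N}_n}}
  \sum_{\substack{\lambda \in \mathbb{Z}^2 \\ |\lambda|^2 = n}}
  a_\lambda \, e^{2\pi i \langle \lambda, x \rangle},
\qquad \Delta f_n = -4\pi^2 n \, f_n,
% with the a_\lambda i.i.d. standard complex Gaussians subject to
% a_{-\lambda} = \overline{a_\lambda} (so that f_n is real), and
% \mathcal{N}_n the number of lattice points on the circle of radius \sqrt{n}.
```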
Abstract:
Pervasive use of pointers in large-scale real-world applications continues to make points-to analysis an important optimization enabler. The rapid growth of software systems demands a scalable pointer analysis algorithm. A typical inclusion-based points-to analysis iteratively evaluates constraints and computes a points-to solution until a fixpoint is reached. In each iteration, (i) points-to information is propagated across directed edges in a constraint graph G, and (ii) more edges are added by processing the points-to constraints. We observe that prioritizing the order in which the information is processed within each of these two steps can lead to efficient execution of the points-to analysis. While earlier work in the literature focuses only on the propagation order, we argue that the other dimension, prioritizing the constraint processing, can lead to even greater improvements in how fast the fixpoint of the points-to algorithm is reached. This becomes especially important as we prove that finding an optimal sequence for processing the points-to constraints is NP-complete. The prioritization scheme proposed in this paper is general enough to be applied to any of the existing points-to analyses. Using the prioritization framework developed in this paper, we implement prioritized versions of Andersen's analysis, Deep Propagation, Hardekopf and Lin's Lazy Cycle Detection, and Bloom filter based points-to analysis. In each case, we report significant improvements in the analysis times (33%, 47%, 44%, and 20%, respectively) as well as in the memory requirements for a large suite of programs, including SPEC 2000 benchmarks and five large open source programs.
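One way to realize such a scheme is to replace the FIFO constraint worklist with a priority queue, as sketched below. The priority function (address-of before copy before load/store) is a hypothetical stand-in for the paper's heuristics, and the constraint evaluation itself is elided.

```python
# Sketch: process points-to constraints in priority order via a heap
# instead of FIFO. The rank below is a hypothetical priority function;
# a real solver would evaluate each constraint and push generated work.
import heapq

def process_prioritized(constraints, priority):
    heap = [(priority(c), i, c) for i, c in enumerate(constraints)]
    heapq.heapify(heap)                    # i breaks ties, keeping pops stable
    order = []
    while heap:
        _, _, c = heapq.heappop(heap)
        order.append(c)                    # real solver: evaluate c, add edges
    return order

rank = {"addr": 0, "copy": 1, "load": 2, "store": 3}   # hypothetical ordering
cs = [("store", "*p", "q"), ("addr", "p", "x"), ("copy", "r", "p")]
print(process_prioritized(cs, lambda c: rank[c[0]]))
# -> [('addr', 'p', 'x'), ('copy', 'r', 'p'), ('store', '*p', 'q')]
```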
Abstract:
The effectiveness of the last-level shared cache is crucial to the performance of a multi-core system. In this paper, we observe and make use of the DelinquentPC–Next-Use characteristic to improve shared cache performance. We propose a new PC-centric cache organization, NUcache, for the shared last-level cache of multi-cores. NUcache logically partitions the associative ways of a cache set into MainWays and DeliWays. While all lines have access to the MainWays, only lines brought in by a subset of delinquent PCs, selected by a PC selection mechanism, are allowed to enter the DeliWays. The PC selection mechanism is an intelligent cost-benefit analysis based algorithm that uses Next-Use information to select the set of PCs that can maximize the hits experienced in the DeliWays. Performance evaluation reveals that NUcache improves performance over a baseline design by 9.6%, 30%, and 33% for dual-, quad-, and eight-core workloads, respectively, composed of SPEC benchmarks. We also show that NUcache is more effective than other well-known cache-partitioning algorithms.
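A toy model of the admission policy for a single NUcache set, under simplifying assumptions: the set of delinquent PCs is fixed up front (standing in for the cost-benefit PC selection mechanism), and lines spill from the MainWays into the DeliWays on eviction only if a selected PC brought them in.

```python
# Toy single-set model of the MainWays/DeliWays split. All fills go to the
# MainWays; on eviction, only lines fetched by a selected (delinquent) PC
# are retained in the DeliWays. The fixed PC set below is a stand-in for
# the paper's Next-Use-based selection mechanism.
from collections import OrderedDict

class NUCacheSet:
    def __init__(self, main_ways, deli_ways, delinquent_pcs):
        self.main = OrderedDict()             # addr -> fetching PC, LRU first
        self.deli = OrderedDict()
        self.main_ways, self.deli_ways = main_ways, deli_ways
        self.delinquent_pcs = delinquent_pcs

    def access(self, addr, pc):
        for ways in (self.main, self.deli):
            if addr in ways:
                ways.move_to_end(addr)        # hit: refresh LRU position
                return True
        if len(self.main) >= self.main_ways:  # miss: evict the LRU MainWay line
            victim, vpc = self.main.popitem(last=False)
            if vpc in self.delinquent_pcs:    # spill into DeliWays if selected
                if len(self.deli) >= self.deli_ways:
                    self.deli.popitem(last=False)
                self.deli[victim] = vpc
        self.main[addr] = pc
        return False

s = NUCacheSet(main_ways=2, deli_ways=2, delinquent_pcs={"PC1"})
for addr, pc in [("A", "PC1"), ("B", "PC2"), ("C", "PC2")]:
    s.access(addr, pc)          # "A" is evicted from MainWays into DeliWays
print(s.access("A", "PC1"))     # -> True: hit in the DeliWays
```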
Abstract:
Environment-friendly management of fruit flies using pheromones helps reduce the pest populations responsible for decreasing yield and crop quality. A nanogel has been prepared from a pheromone, methyl eugenol (ME), using a low-molecular-mass gelator. The nanogel was very stable under open ambient conditions and slowed the evaporation of the pheromone significantly, enabling easy handling and transportation without refrigeration and reducing the frequency of pheromone recharging in the orchard. Notably, the nano-gelled pheromone brought about effective management of Bactrocera dorsalis, a prevalent pest harmful to a number of fruits, including guava. Thus a simple, practical, and low-cost green chemical approach has been developed that offers significant potential for crop protection, long-lasting residual activity, excellent efficacy, and a favorable safety profile, making it well suited for pest management in a variety of crops.
Abstract:
We have demonstrated that cadmium deoxycholate (1), a Cd-salt, provides a convenient and inexpensive route to high quality CdSe nanocrystals with photoluminescence (PL) in the blue to red region of the visible spectrum, with reproducible quantum yields as high as ∼47%. Owing to the high thermal stability of the bile acid based cadmium precursor (decomposition point: 332 °C), it was possible to achieve high injection and growth temperatures (∼300 °C) for the nanocrystals, which was essential for obtaining larger CdSe nanocrystals emitting in the red region (625-650 nm) with a sharp full width at half maximum (FWHM) (23 nm) and multiple (6-7) excitonic absorption features. The as-prepared CdSe nanocrystals synthesized from cadmium deoxycholate represent a series of highly efficient emitters with pure colours and controllable sizes, shapes and structures.
Abstract:
Network Intrusion Detection Systems (NIDS) intercept traffic at an organization's network periphery to thwart intrusion attempts. A signature-based NIDS compares the intercepted packets against its database of known vulnerability and malware signatures to detect such cyber attacks. These signatures are represented using Regular Expressions (REs) and strings; REs, because of their higher expressive power, are preferred over simple strings for writing signatures. We present a Cascaded Automata Architecture that performs memory-efficient Regular Expression pattern matching using existing string matching solutions. The proposed architecture performs two-stage Regular Expression pattern matching: we replace the substring and character-class components of the Regular Expression with new symbols, and we address the challenges involved in this approach. We augment the word-based automata obtained from the rewritten Regular Expressions with counter-based states and length-bound transitions to perform Regular Expression pattern matching. We evaluated our architecture on Regular Expressions taken from Snort rulesets and were able to reduce the number of automata states by 50% to 85%. Additionally, we could reduce the number of transitions by a factor of 3, leading to a further reduction in memory requirements.
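The flavor of the two-stage idea can be sketched as follows: stage one uses plain string matching to replace known substrings with single symbols, and stage two runs an ordinary pattern over the resulting symbol stream. This toy uses Python's re module and hypothetical signature fragments; the paper's word-based automata with counter states and length-bound transitions are not reproduced.

```python
# Toy two-stage matcher: substrings -> symbols (string matching), then a
# regex over the symbol stream. The fragments below are hypothetical.
import re

substrings = {"GET ": "\x01", "HTTP/1.1": "\x02"}  # stage-1 dictionary

def to_symbols(packet: str) -> str:
    for s, sym in substrings.items():              # stage 1: string matching
        packet = packet.replace(s, sym)
    return packet

signature = re.compile("\x01.*\x02")               # rewritten RE over symbols
print(bool(signature.search(to_symbols("GET /index.html HTTP/1.1"))))  # True
```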
Abstract:
This paper investigates a new approach to point matching in multi-sensor satellite images. The feature points are matched using multi-objective optimization (an angle criterion and a distance condition) based on a Genetic Algorithm (GA). The optimization is more effective because it considers both the angle criterion and the distance condition, incorporating multi-objective switching in the fitness function. It matches three corresponding corner points detected in the reference and sensed images, and the sensed image is then aligned with the reference image using an affine transformation. The performance of the image registration is evaluated on the results obtained, and the proposed approach is found to be efficient.
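Below is a sketch of what such a combined fitness might look like for a candidate triple of corner correspondences. The weighting is a hypothetical stand-in for the paper's multi-objective switching rule, which the abstract does not spell out, and the surrounding GA loop is elided.

```python
# Hypothetical angle+distance fitness for a candidate match of three corner
# points (lower is better). The 0.5 weight stands in for the paper's
# multi-objective switching rule, which the abstract does not detail.
import math

def triangle_angles(pts):
    """Interior angles of the triangle through three 2-D points."""
    a, b, c = pts
    def ang(p, q, r):                      # angle at vertex q
        v1 = (p[0] - q[0], p[1] - q[1])
        v2 = (r[0] - q[0], r[1] - q[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        n = math.hypot(*v1) * math.hypot(*v2)
        return math.acos(max(-1.0, min(1.0, dot / n)))
    return [ang(c, a, b), ang(a, b, c), ang(b, c, a)]

def fitness(ref_triple, sensed_triple, w_angle=0.5):
    angle_err = sum(abs(x - y) for x, y in
                    zip(triangle_angles(ref_triple), triangle_angles(sensed_triple)))
    dist_err = abs(math.dist(ref_triple[0], ref_triple[1]) -
                   math.dist(sensed_triple[0], sensed_triple[1]))
    return w_angle * angle_err + (1 - w_angle) * dist_err

# A scaled copy of the same triangle: zero angle error, unit distance error.
print(fitness([(0, 0), (1, 0), (0, 1)], [(0, 0), (2, 0), (0, 2)]))  # -> 0.5
```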
Abstract:
In recent years, there has been significant effort in the synthesis of nanocrystalline spinel ferrites due to their unique properties. Among them, zinc ferrite has been widely investigated for countless applications. As traditional ferrite synthesis methods are energy- and time-intensive, there is a need for a resource-effective process that can prepare ferrites quickly and efficiently without compromising material quality. We report a novel microwave-assisted soft-chemical synthesis technique in a liquid medium that produces ZnFe2O4 powder below 100 °C within 5 min. The use of β-diketonate precursors, featuring direct metal-to-oxygen bonds in their molecular structure, not only reduces process temperature and duration sharply, but also leads to water-soluble and non-toxic by-products. The as-synthesized powder is annealed at 300 °C for 2 h in a conventional anneal (CA) schedule. An alternative procedure, a 2-min rapid anneal (RA) at 300 °C, is shown to be sufficient to crystallize the ferrite particles, which show a saturation magnetization (MS) of 38 emu/g, compared with 39 emu/g for the 2-h CA. This signifies that our process can reduce energy consumption by ∼85% simply by altering the anneal scheme. Recognizing the criticality of the anneal step to the energy budget, a more energy-efficient variation of the reaction process was developed which obviates the need for post-synthesis annealing altogether. It is shown that the process can also be employed to deposit crystalline thin films of ferrites.
Abstract:
In a cooperative system with an amplify-and-forward (AF) relay, the cascaded channel training protocol enables the destination to estimate the source-destination channel gain and the product of the source-relay (SR) and relay-destination (RD) channel gains using only two pilot transmissions from the source. Notably, the destination does not require a separate estimate of the SR channel. We develop a new expression for the symbol error probability (SEP) of AF relaying when imperfect channel state information (CSI) is acquired using this training protocol. A tight SEP upper bound is also derived; it shows that full diversity is achieved, albeit at high signal-to-noise ratios (SNRs). Our analysis uses fewer simplifying assumptions and leads to expressions that are accurate even at low SNRs and differ from those in the literature. For instance, it does not approximate the estimate of the product of the SR and RD channel gains by the product of the estimates of the SR and RD channel gains. We show that cascaded channel estimation often outperforms a channel estimation protocol that incurs a greater training overhead by forwarding a quantized estimate of the SR channel gain to the destination. The extent of pilot power boosting, if allowed, that is required to improve performance is also quantified.
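In the standard AF signal model (generic textbook form, in our notation; not quoted from the paper), the two pilot transmissions give the destination:

```latex
% Pilot 1 arrives on the direct link; pilot 2 is amplified (gain G) by the relay:
y_1 = h_{sd}\, x_p + n_1, \qquad
y_2 = G\, h_{rd}\,\bigl( h_{sr}\, x_p + n_r \bigr) + n_2 ,
% so y_1 yields an estimate of h_{sd}, while y_2 yields an estimate of the
% cascaded product h_{sr} h_{rd} directly -- no separate estimate of h_{sr}
% is needed, which is the point of the cascaded training protocol.
```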
Abstract:
Training for receive antenna selection (AS) differs from training for conventional multiple antenna systems because of the limited hardware usage inherent in AS. We analyze and optimize the performance of a novel energy-efficient training method tailored for receive AS, in which the transmitter sends not only pilots that enable the selection process, but also an extra pilot that yields accurate channel estimates for the selected antenna that actually receives data. For time-varying channels, we propose a novel antenna selection rule and prove that it minimizes the symbol error probability (SEP). We also derive closed-form expressions for the SEP of MPSK, and show that the considered training method is significantly more energy-efficient than the conventional AS training method.
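Below is a toy Monte Carlo of the pilot-then-select-then-retrain flow described above, under simplifying assumptions: Rayleigh fading, a static channel, one pilot per antenna, and a naive max-gain selection rule (the paper's SEP-optimal rule for time-varying channels is more involved).

```python
# Toy pilot-based receive AS: estimate every antenna from one noisy pilot,
# select the apparently best one, then refine it with the extra pilot.
# Rayleigh channels and the max-|h_hat| rule are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_ant, pilot_snr = 4, 10.0

def cn(size=None):                      # unit-variance complex Gaussian
    return (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)

h = cn(n_ant)                                 # true channel gains
h_hat = h + cn(n_ant) / np.sqrt(pilot_snr)    # selection-stage estimates
k = int(np.argmax(np.abs(h_hat)))             # naive selection rule
extra = h[k] + cn() / np.sqrt(pilot_snr)      # extra pilot on selected antenna k
h_refined = (h_hat[k] + extra) / 2            # averaging two pilots (static channel)
print(k, abs(h_hat[k] - h[k]), abs(h_refined - h[k]))  # refined error is typically smaller
```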