265 results for "Efficient elliptic curve arithmetic"
Abstract:
A mild, environmentally friendly method for the reduction of aromatic nitro groups to amines is reported, using zinc powder in aqueous solutions of chelating ethers. The donor ether acts as a ligand and also serves as a co-solvent; water is the proton source. This procedure is also a new method for activating zinc for electron-transfer reduction of aromatic nitro compounds. The reduction proceeds in a neutral medium, and other reducible groups remain unaffected. The ethers used are dioxolane, 1,4-dioxane, ethoxymethoxyethane, dimethoxymethane, 1,2-dimethoxyethane, and diglyme.
Abstract:
Three-component chiral derivatization protocols have been developed for 1H, 13C, and 19F NMR spectroscopic discrimination of chiral diacids through their coordination and self-assembly with optically active (R)-alpha-methylbenzylamine and 2-formylphenylboronic acid or 3-fluoro-2-formylphenylboronic acid. These protocols yield a mixture of diastereomeric imino-boronate esters which are identified by well-resolved diastereotopic peaks with chemical shift differences ranging up to 0.6 and 2.1 ppm in the corresponding 1H and 19F NMR spectra, respectively, without any racemization or kinetic resolution, thereby enabling the determination of enantiopurity. A protocol has also been developed for the discrimination of chiral alpha-methyl amines, using optically pure trans-1,2-cyclohexanedicarboxylic acid in combination with 2-formylphenylboronic acid or 3-fluoro-2-formylphenylboronic acid. The proposed strategies have been demonstrated on a large number of chiral diacids and chiral alpha-methyl amines.
Abstract:
We reconsider standard uniaxial fatigue test data obtained from handbooks. Many S-N curve fits to such data represent the median life and exclude load-dependent variance in life. Presently available approaches for incorporating probabilistic aspects explicitly within the S-N curves have some shortcomings, which we discuss. We propose a new linear S-N fit with a prespecified failure probability, load-dependent variance, and reasonable behavior at extreme loads. We fit our parameters using maximum likelihood, show the reasonableness of the fit using Q-Q plots, and obtain standard error estimates via Monte Carlo simulations. The proposed fitting method may be used for obtaining S-N curves from the same data as already available, with the same mathematical form, but for a smaller failure probability, say 10% instead of 50%, and with a fitted line that is not parallel to the 50% (median) line.
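The core fitting idea can be sketched as follows: under a log-linear model log N = a + b·log S with Gaussian scatter, the maximum-likelihood estimates of the line coincide with least squares, and a curve for a prespecified failure probability p is obtained by shifting the median line by the p-quantile of the fitted scatter. The sketch below assumes constant variance for brevity (the paper's load-dependent variance would require numerical optimization of the likelihood), and all names are illustrative, not the authors':

```python
from math import log
from statistics import NormalDist

def fit_sn(S, N, p=0.10):
    """Fit log N = a + b*log S by least squares (the Gaussian MLE for the
    line), then shift by the p-quantile of the fitted scatter to get an
    S-N curve with prespecified failure probability p.
    Constant-variance sketch only; a load-dependent variance would need
    a numerical maximum-likelihood fit."""
    xs = [log(s) for s in S]
    ys = [log(n) for n in N]
    m = len(xs)
    xbar = sum(xs) / m
    ybar = sum(ys) / m
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = ybar - b * xbar
    # MLE of the scatter (divides by m, not m - 2)
    sigma = (sum((y - a - b * x) ** 2 for x, y in zip(xs, ys)) / m) ** 0.5
    z = NormalDist().inv_cdf(p)   # negative for p < 0.5, shifts the line down
    # life quantile at stress s: log N_p(s) = a + b*log s + z*sigma
    return a, b, sigma, lambda s: a + b * log(s) + z * sigma
```

On exactly linear data the recovered slope and intercept match the generating model and the quantile line collapses onto the median line.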
Abstract:
ZnO nanoparticles (ZnO NPs) prepared by a microwave heating technique were used to modify a gold electrode (ZnO/Au) for hydrazine detection. The synthesized product was characterized by various techniques. A detailed electrochemical investigation of the oxidation of hydrazine on the ZnO/Au electrode in 0.02 M phosphate buffer solution (PBS) of pH 7.4 was carried out. A very low detection limit of 66 nM (S/N = 4) and a wide linear current response over the concentration range 66.0 × 10⁻³ to 415 μM were achieved by amperometry. The electrode was found to be stable for over a month when stored in PBS.
Abstract:
4,5-Dihydroisoxazoles continue to attract considerable interest due to their widespread biological activities. Here, we identify an efficient protocol for the preparation of 4,5-dihydroisoxazoles (2-isoxazolines) (4a-g) from quinolinyl chalcones. The nucleolytic activities of the synthesized compounds were investigated by agarose gel electrophoresis. All the compounds showed remarkable, concentration-dependent DNA cleavage activity against pUC19 DNA under 365 nm UV light. The DNA cleavage activity was significantly enhanced by the presence of iminyl and carboxy radicals of DIQ.
Abstract:
In this work, field emission studies of a new type of field emitter, zinc oxide (ZnO) core/graphitic carbon (g-C) shell nanowires, are presented. The nanowires are synthesized by chemical vapor deposition of zinc acetate at 1300 °C. Scanning and transmission electron microscopy characterization confirms the high aspect ratio and the novel core-shell morphology of the nanowires. The Raman spectrum of the nanowire mat shows the characteristic Raman modes of both the g-C shell and the ZnO core. A low turn-on field of 2.75 V/μm and a high current density of 1.0 mA/cm² at 4.5 V/μm demonstrate the superior field emission behavior of the ZnO/g-C nanowires compared to bare ZnO nanowires.
Abstract:
Herein, a new aromatic carboxylate ligand, 4-(dipyridin-2-yl)aminobenzoic acid (HL), has been designed and employed for the construction of a series of lanthanide complexes (Eu3+ = 1, Tb3+ = 2, and Gd3+ = 3). Complexes 1 and 2 were structurally authenticated by single-crystal X-ray diffraction and were found to exist as infinite 1D coordination polymers with the general formulas {[Eu(L)3(H2O)2]}n (1) and {[Tb(L)3(H2O)]·(H2O)}n (2). Both compounds crystallize in the monoclinic space group C2/c. The photophysical properties demonstrated that the developed 4-(dipyridin-2-yl)aminobenzoate ligand is well suited for the sensitization of Tb3+ emission (Φoverall = 64%) thanks to the favorable position of the ligand triplet state (3ππ*) [the energy difference between the triplet state of the ligand and the excited state of Tb3+ is ΔE = E(3ππ*) − E(5D4) = 3197 cm⁻¹], as investigated in the Gd3+ complex. On the other hand, the corresponding Eu3+ complex shows weak luminescence efficiency (Φoverall = 7%) due to poor matching of the ligand triplet state with the emissive excited states of the metal ion (ΔE = E(3ππ*) − E(5D0) = 6447 cm⁻¹). Furthermore, a mixed lanthanide system featuring both Eu3+ and Tb3+ ions, with the general formula {[Eu0.5Tb0.5(L)3(H2O)2]}n (4), was also synthesized, and its luminescent properties were evaluated and compared with those of the analogous single-lanthanide-ion systems (1 and 2). The lifetime measurements for 4 strongly support the premise that efficient energy transfer occurs between Tb3+ and Eu3+ in the mixed lanthanide system (η = 86%).
Abstract:
Structural Support Vector Machines (SSVMs) have become a popular tool in machine learning for predicting structured objects like parse trees, Part-of-Speech (POS) label sequences and image segments. Various efficient algorithmic techniques have been proposed for training SSVMs for large datasets. The typical SSVM formulation contains a regularizer term and a composite loss term. The loss term is usually composed of the Linear Maximum Error (LME) associated with the training examples. Other alternatives for the loss term are yet to be explored for SSVMs. We formulate a new SSVM with Linear Summed Error (LSE) loss term and propose efficient algorithms to train the new SSVM formulation using primal cutting-plane method and sequential dual coordinate descent method. Numerical experiments on benchmark datasets demonstrate that the sequential dual coordinate descent method is faster than the cutting-plane method and reaches the steady-state generalization performance faster. It is thus a useful alternative for training SSVMs when linear summed error is used.
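To illustrate the sequential dual coordinate descent idea the abstract refers to, here is a minimal sketch for a plain binary linear hinge-loss SVM (not the paper's structural LSE formulation; the closed-form single-variable update follows the standard dual coordinate descent scheme, and all names are ours):

```python
def dcd_linear_svm(X, y, C=1.0, epochs=50):
    """Sequential dual coordinate descent for a binary linear SVM
    (dual box constraint 0 <= alpha_i <= C).  One dual variable is
    optimized in closed form per step while the primal weight vector w
    is kept in sync -- illustrative of the method family only."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    alpha = [0.0] * n
    q = [sum(v * v for v in xi) for xi in X]   # diagonal of the Gram matrix
    for _ in range(epochs):
        for i in range(n):
            g = y[i] * sum(w[j] * X[i][j] for j in range(d)) - 1.0
            # projected gradient: skip if alpha_i is already optimal at a bound
            if (alpha[i] == 0 and g >= 0) or (alpha[i] == C and g <= 0):
                continue
            new_a = min(max(alpha[i] - g / q[i], 0.0), C)
            delta = (new_a - alpha[i]) * y[i]
            alpha[i] = new_a
            for j in range(d):                 # keep w = sum_i alpha_i y_i x_i
                w[j] += delta * X[i][j]
    return w
```

Each coordinate step touches a single example, which is what makes the sequential dual approach cheap per iteration and quick to reach useful generalization performance.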
Abstract:
The introduction of processor-based instruments in power systems is resulting in rapid growth of the measured data volume. The present practice in most utilities is to store only some of the important data in a retrievable fashion, and only for a limited period; subsequently, even this data is either deleted or moved to backup devices. The investigations presented here explore the application of lossless data compression techniques for archiving all the operational data, so that it can be put to more effective use. Four arithmetic coding methods, suitably modified for handling power system steady-state operational data, are proposed. The performance of the proposed methods is evaluated using actual data pertaining to the Southern Regional Grid of India.
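The core of arithmetic coding, on which the proposed methods build, can be sketched with exact rationals for clarity (production coders use fixed-precision integer arithmetic with renormalization, and the paper's power-system-specific modifications are not shown here): each symbol narrows a subinterval of [0, 1) in proportion to its probability, and any number inside the final interval identifies the whole message.

```python
from fractions import Fraction

def arith_encode(msg, model):
    """Arithmetic coding with exact rationals.
    model: list of (symbol, probability) pairs summing to 1."""
    low, width = Fraction(0), Fraction(1)
    for s in msg:
        cum = Fraction(0)
        for sym, p in model:
            if sym == s:
                low += cum * width   # shift to the symbol's subinterval
                width *= p           # shrink by its probability
                break
            cum += p
    # any number in [low, low + width) identifies the whole message
    return low + width / 2

def arith_decode(x, model, n):
    out = []
    for _ in range(n):
        cum = Fraction(0)
        for sym, p in model:
            if cum <= x < cum + p:   # which symbol's subinterval holds x?
                out.append(sym)
                x = (x - cum) / p    # rescale and continue
                break
            cum += p
    return ''.join(out)
```

Frequent symbols consume less of the interval per step, so the final number needs fewer bits to pin down, which is the source of the compression.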
Abstract:
A newly implemented G-matrix Fourier transform (GFT) (4,3)D HC(C)CH experiment is presented, in conjunction with (4,3)D HCCH, to efficiently identify 1H/13C sugar spin systems in 13C-labeled nucleic acids. This experiment enables rapid collection of highly resolved relayed 4D HC(C)CH spectral information, that is, shift correlations of 13C-1H groups separated by two carbon bonds. For RNA, (4,3)D HC(C)CH takes advantage of the comparatively favorable 1'- and 3'-CH signal dispersion for complete spin system identification, including 5'-CH. The (4,3)D HC(C)CH/HCCH based strategy is exemplified for the 30-nucleotide 3'-untranslated region of the pre-mRNA of the human U1A protein.
Abstract:
We present external memory data structures for efficiently answering range-aggregate queries. The range-aggregate problem is defined as follows: given a set of weighted points in R^d, compute the aggregate of the weights of the points that lie inside a d-dimensional orthogonal query rectangle. The aggregates considered in this paper are COUNT, SUM, and MAX. First, we develop a structure for answering two-dimensional range-COUNT queries that uses O(N/B) disk blocks and answers a query in O(log_B N) I/Os, where N is the number of input points and B is the disk block size. The structure can be extended to obtain a near-linear-size structure for answering range-SUM queries in O(log_B N) I/Os, and a linear-size structure for answering range-MAX queries in O(log_B^2 N) I/Os. Our structures can be made dynamic and extended to higher dimensions.
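For intuition, the two-dimensional range-COUNT problem can be sketched in memory with a prefix-sum grid using inclusion-exclusion over the four rectangle corners. This is only an analogue, not the paper's structure: the contribution there is the O(N/B)-block layout and O(log_B N)-I/O query bound for external memory, which a dense grid does not provide.

```python
class RangeCount2D:
    """In-memory sketch of 2D range-COUNT via prefix sums over a W x H
    integer grid: O(1) per query after O(W*H) preprocessing."""

    def __init__(self, points, W, H):
        g = [[0] * (W + 1) for _ in range(H + 1)]
        for x, y in points:
            g[y + 1][x + 1] += 1
        for i in range(H + 1):              # row-wise prefix sums
            for j in range(1, W + 1):
                g[i][j] += g[i][j - 1]
        for i in range(1, H + 1):           # column-wise prefix sums
            for j in range(W + 1):
                g[i][j] += g[i - 1][j]
        self.g = g                          # g[i][j] = #points with x < j and y < i

    def count(self, x1, y1, x2, y2):
        """Points in the inclusive rectangle [x1, x2] x [y1, y2],
        by inclusion-exclusion over the four corners."""
        g = self.g
        return g[y2 + 1][x2 + 1] - g[y1][x2 + 1] - g[y2 + 1][x1] + g[y1][x1]
```

Replacing the per-cell counts with weights gives range-SUM the same way; range-MAX does not decompose by inclusion-exclusion, which is why the paper's MAX structure pays an extra log_B N factor.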
Abstract:
In this paper we study the problem of designing SVM classifiers when the kernel matrix K is affected by uncertainty. Specifically, K is modeled as a positive affine combination of given positive semidefinite kernels, with the coefficients ranging in a norm-bounded uncertainty set. We treat the problem using the Robust Optimization methodology. This reduces the uncertain SVM problem to a deterministic conic quadratic problem which can in principle be solved by a polynomial-time Interior Point (IP) algorithm. However, for large-scale classification problems, IP methods become intractable and one has to resort to first-order gradient-type methods. The strategy we use here is to reformulate the robust counterpart of the uncertain SVM problem as a saddle point problem and employ a special gradient scheme which works directly on the convex-concave saddle function. The algorithm is a simplified version of a general scheme due to Juditsky and Nemirovski (2011). It achieves an O(1/T²) reduction of the initial error after T iterations. A comprehensive empirical study on both synthetic data and real-world protein structure datasets shows that the proposed formulations achieve the desired robustness, and the saddle-point-based algorithm significantly outperforms the IP method.
Abstract:
Points-to analysis is a key compiler analysis; several memory-related optimizations use points-to information to improve their effectiveness. Points-to analysis is performed by building a constraint graph of pointer variables and dynamically updating it to propagate more and more points-to information across its subset edges. So far, the structure of the constraint graph has been only trivially exploited for efficient propagation of information, e.g., in identifying cyclic components or in propagating information in topological order. We perform a careful study of its structure and propose a new kind of pointer equivalence based on the notion of dominant pointers, which provides significantly more opportunities for reducing the number of pointers tracked during the analysis. Based on this hitherto unexplored form of pointer equivalence, we develop a new inclusion-based, flow-insensitive, context-sensitive points-to analysis algorithm which uses incremental dominator updates to compute points-to information efficiently. Using a large suite of programs consisting of the SPEC 2000 benchmarks and five large open source programs, we show that our points-to analysis is 88% faster than BDD-based Lazy Cycle Detection and 2× faster than Deep Propagation. We argue that our approach of detecting dominator-based pointer equivalence is key to improving points-to analysis efficiency.
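The propagation loop at the heart of inclusion-based points-to analysis can be sketched as follows, restricted to address-of (p = &x) and copy (p = q) constraints. Load/store constraints, context sensitivity, online cycle elimination, and the paper's dominant-pointer equivalence are all omitted; this shows only the subset-edge propagation the abstract describes.

```python
from collections import defaultdict

def andersen_copy_only(addr_of, copies):
    """Inclusion-based points-to propagation over copy edges only.
    addr_of: (p, x) pairs for p = &x.
    copies:  (p, q) pairs for p = q, i.e. pts(q) must flow into pts(p)."""
    pts = defaultdict(set)
    succ = defaultdict(set)          # succ[q]: nodes whose sets include pts(q)
    for p, x in addr_of:
        pts[p].add(x)
    for p, q in copies:
        succ[q].add(p)
    work = list(pts)                 # seed with nodes that have initial facts
    while work:
        q = work.pop()
        for p in succ[q]:
            if not pts[q] <= pts[p]:
                pts[p] |= pts[q]
                work.append(p)       # p gained facts; repropagate from p
    return pts
```

Each node re-enters the worklist only when its set grows, so the loop terminates at the least fixpoint of the subset constraints.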
Abstract:
Using the spectral multiplicities of the standard torus, we endow the Laplace eigenspaces with Gaussian probability measures. This induces a notion of random Gaussian Laplace eigenfunctions on the torus ("arithmetic random waves"). We study the distribution of the nodal length of random eigenfunctions for large eigenvalues; our primary result is that the asymptotics of the variance is nonuniversal. Our result is intimately related to the arithmetic of lattice points lying on a circle whose radius corresponds to the energy.
Abstract:
Pervasive use of pointers in large-scale real-world applications continues to make points-to analysis an important optimization enabler. The rapid growth of software systems demands a scalable pointer analysis algorithm. A typical inclusion-based points-to analysis iteratively evaluates constraints and computes a points-to solution until a fixpoint. In each iteration, (i) points-to information is propagated across directed edges in a constraint graph G, and (ii) more edges are added by processing the points-to constraints. We observe that prioritizing the order in which information is processed within each of these two steps can lead to more efficient execution of the points-to analysis. While earlier work in the literature focuses only on the propagation order, we argue that the other dimension, prioritizing the constraint processing, can lead to even greater improvements in how fast the fixpoint of the points-to algorithm is reached. This becomes especially important as we prove that finding an optimal sequence for processing the points-to constraints is NP-complete. The prioritization scheme proposed in this paper is general enough to be applied to any of the existing points-to analyses. Using the prioritization framework developed in this paper, we implement prioritized versions of Andersen's analysis, Deep Propagation, Hardekopf and Lin's Lazy Cycle Detection, and Bloom-filter-based points-to analysis. In each case, we report significant improvements in analysis times (33%, 47%, 44%, and 20%, respectively) as well as in memory requirements, for a large suite of programs including the SPEC 2000 benchmarks and five large open source programs.
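The prioritization idea can be sketched by replacing the flat worklist of an inclusion-based analysis with a priority queue, so the order in which nodes are propagated is controlled by a key function. The key used below (current points-to set size) is an illustrative stand-in, not the ordering the paper actually derives; only address-of and copy constraints are handled.

```python
import heapq
from collections import defaultdict

def prioritized_propagation(addr_of, copies, key=len):
    """Subset-edge propagation driven by a priority queue instead of a
    flat worklist.  addr_of: (p, x) pairs for p = &x; copies: (p, q)
    pairs for p = q.  key maps a points-to set to a priority (smaller
    pops first) -- an illustrative heuristic, not the paper's scheme."""
    pts = defaultdict(set)
    succ = defaultdict(set)
    for p, x in addr_of:
        pts[p].add(x)
    for p, q in copies:
        succ[q].add(p)
    heap = [(key(pts[q]), q) for q in pts]
    heapq.heapify(heap)
    while heap:
        _, q = heapq.heappop(heap)
        for p in succ[q]:
            if not pts[q] <= pts[p]:
                pts[p] |= pts[q]
                # re-enqueue p under its updated priority
                heapq.heappush(heap, (key(pts[p]), p))
    return pts
```

Since the fixpoint is the same under any processing order, the key function changes only how much redundant repropagation occurs, which is exactly the quantity the paper's prioritization framework targets.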