927 results for cryptographic pairing computation, elliptic curve cryptography
Abstract:
We introduce a family of Hamiltonian systems for measurement-based quantum computation with continuous variables. The Hamiltonians (i) are quadratic, and therefore two-body, (ii) are of short range, (iii) are frustration-free, and (iv) possess a constant energy gap proportional to the squared inverse of the squeezing. Their ground states are the celebrated Gaussian graph states, which are universal resources for quantum computation in the limit of infinite squeezing. These Hamiltonians constitute the basic ingredient for the adiabatic preparation of graph states and thus open new avenues for the physical realization of continuous-variable quantum computing beyond the standard optical approaches. We characterize the correlations in these systems at thermal equilibrium. In particular, we prove that the correlations across any multipartition are contained exactly in its boundary, automatically yielding a correlation area law. © 2011 American Physical Society.
Abstract:
Particle-in-cell (PIC) simulations of relativistic shocks are in principle capable of predicting the spectra of photons that are radiated incoherently by the accelerated particles. The most direct method evaluates the spectrum using the fields given by the Liénard-Wiechert potentials. However, for relativistic particles this procedure is computationally expensive. Here we present an alternative method that uses the concept of the photon formation length. The algorithm is suitable for evaluating spectra either from particles moving in a specific realization of a turbulent electromagnetic field or from trajectories given as a finite, discrete time series by a PIC simulation. The main advantage of the method is that it identifies the intrinsic spectral features and filters out those that are artifacts of the limited time resolution and finite duration of input trajectories.
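As a point of reference for the cost the authors avoid, the direct method can be sketched as a brute-force evaluation of the classical radiation integral along a sampled trajectory. The sketch below is a minimal illustration of the Liénard-Wiechert route, not the paper's formation-length algorithm; the function name, the symbolic charge/prefactor and the single observation direction are assumptions made for the example.

```python
import numpy as np

def radiation_spectrum(t, r, beta, n_hat, omegas, q=1.0, c=1.0):
    """Direct evaluation of d^2 I / (d omega d Omega) for one particle whose
    trajectory is sampled at times t, positions r (N, 3) and velocities
    beta = v/c (N, 3), observed along the unit vector n_hat (3,)."""
    beta_dot = np.gradient(beta, t, axis=0)                    # d(beta)/dt
    doppler = 1.0 - beta @ n_hat                               # 1 - n.beta
    amp = np.cross(n_hat, np.cross(n_hat - beta, beta_dot)) / doppler[:, None] ** 2
    retarded = t - (r @ n_hat) / c                             # t - n.r(t)/c
    spectrum = []
    for w in omegas:                                           # O(N) work per frequency
        integral = np.trapz(amp * np.exp(1j * w * retarded)[:, None], t, axis=0)
        spectrum.append(q ** 2 / (4 * np.pi ** 2 * c) * np.sum(np.abs(integral) ** 2))
    return np.array(spectrum)
```

The cost grows with the number of frequencies times the trajectory length, which is what makes this route expensive for highly relativistic particles whose spectra extend to very high frequencies.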
Abstract:
As ubiquitous computing becomes a reality, sensitive information is increasingly processed and transmitted by smart cards, mobile devices and various types of embedded systems. This has led to the requirement for a new class of lightweight cryptographic algorithms to ensure security in these resource-constrained environments. The International Organization for Standardization (ISO) has recently standardised two low-cost block ciphers for this purpose, Clefia and Present. In this paper we provide the first comprehensive hardware architecture comparison between these ciphers, as well as a comparison with the current National Institute of Standards and Technology (NIST) standard, the Advanced Encryption Standard.
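For orientation, the hardware appeal of Present comes from its extremely light round function: a round-key XOR, sixteen parallel 4-bit S-boxes and a fixed bit permutation. The sketch below follows the published S-box and permutation tables; the key schedule and the 31-round loop are omitted, and the helper names are ours.

```python
# One round of the Present block cipher on a 64-bit state (key schedule omitted).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def sbox_layer(state):
    out = 0
    for nibble in range(16):                       # sixteen 4-bit S-boxes in parallel
        out |= SBOX[(state >> (4 * nibble)) & 0xF] << (4 * nibble)
    return out

def p_layer(state):
    out = 0
    for i in range(64):                            # bit i moves to position P(i)
        j = 63 if i == 63 else (16 * i) % 63
        out |= ((state >> i) & 1) << j
    return out

def present_round(state, round_key):
    return p_layer(sbox_layer(state ^ round_key))
```

In hardware the permutation is pure wiring and the S-box a small lookup, which is why Present-style designs compare so favourably in area against AES.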
Abstract:
Reactive power has become a vital resource in modern electricity networks due to the increased penetration of distributed generation. This paper examines the extended reactive power capability of DFIGs to improve network stability and to manage the network voltage profile during transient faults and dynamic operating conditions. A coordinated reactive power controller is designed by considering the reactive power capabilities of the rotor-side converter (RSC) and the grid-side converter (GSC) of the DFIG in order to maximise the reactive power support from DFIGs. The study illustrates that a significant reactive power contribution can be obtained from partially loaded DFIG wind farms for stability enhancement by using the proposed capability-curve-based reactive power controller; hence DFIG wind farms can function as vital dynamic reactive power resources for power utilities without commissioning additional dynamic reactive power devices. Several network-adaptive droop control schemes are also proposed for network voltage management, and their performance has been investigated during variable wind conditions. Furthermore, the influence of reactive power capability on network-adaptive droop control strategies has been investigated, and it has been shown that the enhanced reactive power capability of DFIGs can substantially improve voltage control performance.
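A minimal sketch of the capability-curve-limited droop idea follows. It is illustrative only: the circular approximation of the stator/RSC capability, the fixed GSC headroom term and all names are assumptions for the example, not the paper's controller.

```python
import numpy as np

def dfig_q_command(v_meas, v_ref, droop, p_out, s_rated, q_gsc_max=0.0):
    """Voltage-droop reactive power command, clipped to the headroom left by
    the present active power output (all quantities in per unit)."""
    q_cap = np.sqrt(max(s_rated**2 - p_out**2, 0.0)) + q_gsc_max  # capability limit
    q_cmd = (v_ref - v_meas) / droop                              # droop law: inject Q when V sags
    return float(np.clip(q_cmd, -q_cap, q_cap))

# Example: a partially loaded machine (0.5 pu) has ample reactive headroom.
# dfig_q_command(v_meas=0.95, v_ref=1.0, droop=0.04, p_out=0.5, s_rated=1.0)
```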
Abstract:
This paper proposes a method to assess the small-signal stability of a power system network by selective determination of the modal eigenvalues. This uses an accelerating polynomial transform, designed using approximate eigenvalues obtained from a wavelet approximation. Application to the IEEE 14-bus network model produced computational savings of 20% over the QR algorithm.
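The general mechanism, using rough spectral estimates to build a polynomial whose action suppresses the unwanted part of the spectrum before a subspace iteration, can be illustrated as follows. This is a generic polynomial-accelerated subspace iteration, not the wavelet-designed transform of the paper, and every name is ours.

```python
import numpy as np

def selected_eigs(A, unwanted_shifts, k, iters=50):
    """Approximate k selected eigenvalues of A by iterating with the polynomial
    filter p(A) = prod_j (A - s_j I), whose shifts s_j are placed on the part
    of the spectrum we want to suppress (e.g. from rough prior estimates)."""
    n = A.shape[0]
    Q = np.linalg.qr(np.random.randn(n, k))[0]       # random starting subspace
    for _ in range(iters):
        Z = Q
        for s in unwanted_shifts:                    # apply p(A) factor by factor
            Z = A @ Z - s * Z
        Q = np.linalg.qr(Z)[0]                       # re-orthonormalise
    T = Q.T @ A @ Q                                  # Rayleigh-Ritz projection
    return np.linalg.eigvals(T)
```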
Abstract:
This paper introduces an algorithm that calculates the dominant eigenvalues (in terms of system stability) of a linear model and neglects the exact computation of the non-dominant eigenvalues. The method estimates all of the eigenvalues using wavelet based compression techniques. These estimates are used to find a suitable invariant subspace such that projection by this subspace will provide one containing the eigenvalues of interest. The proposed algorithm is exemplified by application to a power system model.
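For comparison, a selective eigensolve can also be obtained from an off-the-shelf sparse eigensolver in shift-invert mode, which likewise returns only the modes nearest a chosen point of the complex plane. The sketch uses SciPy/ARPACK rather than the wavelet-estimated invariant subspace described above, and the names are ours.

```python
import numpy as np
from scipy.sparse.linalg import eigs

def dominant_modes(A, n_modes=6, shift=0.0):
    # Shift-invert around sigma = shift returns the eigenvalues of A closest
    # to that point, e.g. the poorly damped modes near the imaginary axis.
    vals, vecs = eigs(A, k=n_modes, sigma=shift, which='LM')
    order = np.argsort(-vals.real)            # least damped (rightmost) first
    return vals[order], vecs[:, order]
```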
Abstract:
A significant cold event, deduced from the Greenland ice cores, took place between 8200 and 8000 cal. BP. Modeling of the event suggests that higher northern latitudes would also have experienced considerable decreases in precipitation and that Ireland would have witnessed one of the greatest depressions. However, no well-dated proxy record exists from the British Isles to test the model results. Here we present independent evidence for a phase of major pine recruitment on Irish bogs at around 8150 cal. BP. Dendrochronological dating of subfossil trees from three sites reveals synchronicity in germination across the region, indicative of a regional forcing, and allows for high-precision radiocarbon-based dating. The inner rings of 40% of all samples from the north of Ireland dating to the period 8500-7500 cal. BP fall within a 25-yr window. The concurrent colonization of pine on peatland is interpreted as drier conditions in the region and provides the first substantive proxy data in support of a significant hydrological change in the north of Ireland accompanying the 8.2 ka event. The dating uncertainties associated with the Irish pine record and the Greenland Ice Core Chronology 2005 (GICC05) do not allow for any overlap between the two. Our results indicate that the discrepancy could be an artifact of dating inaccuracy, and support a similar claim by Lohne et al. (2013) for the Younger Dryas boundaries. If real, this asynchrony will most likely have affected interpretations of previous proxy alignments.
Abstract:
In wireless networks, the broadcast nature of the propagation medium makes the communication process vulnerable to malicious nodes (e.g. eavesdroppers) within the coverage area of the transmission. Thus, security issues play a vital role in wireless systems. Traditionally, information security has been addressed in the upper layers (e.g. the network layer) through the design of cryptographic protocols. Cryptography-based security aims to design a protocol such that it is computationally prohibitive for the eavesdropper to decode the information. The idea behind this approach relies on the limited computational power of the eavesdroppers. However, with advances in emerging hardware technologies, relying on protocol-based mechanisms alone is becoming insufficient to achieve secure communications. Owing to this fact, a new paradigm has emerged that implements security at the physical layer. The key principle behind this strategy is to exploit the spatio-temporal characteristics of the wireless channel to guarantee secure data transmission without the need for cryptographic protocols.
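The principle can be made concrete with the classical Gaussian wiretap channel, where the achievable secrecy rate is the positive part of the gap between the legitimate and eavesdropper channel capacities. The helper below is a textbook illustration, not tied to any specific scheme in the text.

```python
import numpy as np

def secrecy_capacity(snr_legitimate, snr_eavesdropper):
    # Gaussian wiretap channel: C_s = [log2(1 + SNR_B) - log2(1 + SNR_E)]^+
    return max(0.0, np.log2(1.0 + snr_legitimate) - np.log2(1.0 + snr_eavesdropper))

# e.g. secrecy_capacity(10.0, 1.0) ~= 2.46 bit/s/Hz: a positive secrecy rate
# exists whenever the legitimate channel is better than the eavesdropper's.
```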
Abstract:
In this paper we present a design methodology for algorithm/architecture co-design of a voltage-scalable, process-variation-aware motion estimator based on significance-driven computation. The fundamental premise of our approach lies in the fact that not all computations are equally significant in shaping the output response of video systems. We use a statistical technique to intelligently identify these significant/not-so-significant computations at the algorithmic level and subsequently change the underlying architecture such that the significant computations are computed in an error-free manner under voltage over-scaling. Furthermore, our design includes an adaptive quality compensation (AQC) block which "tunes" the algorithm and architecture depending on the magnitude of voltage over-scaling and the severity of process variations. Simulation results show average power savings of approximately 33% for the proposed architecture when compared to a conventional implementation in 90 nm CMOS technology. The maximum output quality loss in terms of Peak Signal to Noise Ratio (PSNR) was approximately 1 dB, without incurring any throughput penalty.
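For reference, the quality metric quoted above is computed as follows; this is the standard PSNR definition, shown as a generic helper rather than part of the paper's design.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    # Peak Signal-to-Noise Ratio (dB) between two equally sized frames.
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```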
Abstract:
In this paper, we propose a design paradigm for energy-efficient and variation-aware operation of next-generation multicore heterogeneous platforms. The main idea behind the proposed approach lies in the observation that not all operations are equally important in shaping the output quality of various applications and of the overall system. Based on this observation, we suggest that all levels of the software design stack, including the programming model, compiler, operating system (OS) and run-time system, should identify the critical tasks and ensure correct operation of such tasks by assigning them to dynamically adjusted reliable cores/units. Specifically, based on error rates and operating conditions identified by a sense-and-adapt (SeA) unit, the OS selects and sets the right mode of operation of the overall system. The run-time system identifies the critical/less-critical tasks based on special directives and schedules them to the appropriate units, which are dynamically adjusted for highly accurate or approximate operation by tuning their voltage/frequency. Units that execute less significant operations can operate at voltages lower than what is required for correct operation and thus consume less power, since such tasks do not always need to be exact, as opposed to the critical ones. Such a scheme can lead to energy-efficient and reliable operation, while reducing the design cost and overheads of conventional circuit/micro-architecture-level techniques.
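A minimal sketch of the run-time idea, where tasks tagged as critical are dispatched only to cores kept at nominal voltage/frequency while the rest may land on aggressively scaled cores, is given below. The two-pool model, the Task type and all names are illustrative assumptions, not the proposed system.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    critical: bool          # set from programmer/compiler directives

def dispatch(tasks, reliable_cores, approximate_cores):
    """Map each task to a core pool according to its criticality tag."""
    schedule = {}
    for i, task in enumerate(tasks):
        pool = reliable_cores if task.critical else approximate_cores
        schedule[task.name] = pool[i % len(pool)]   # simple round-robin per pool
    return schedule

# dispatch([Task("decode_header", True), Task("filter_pixels", False)],
#          ["core0", "core1"], ["core2", "core3"])
```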
Abstract:
In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of observed data. Such distribution functions usually rely on additional hyperparameters which need to be tuned in order to achieve optimum predictive performance; this operation can be efficiently performed in an Empirical Bayes fashion by maximizing the marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that each evaluation is usually computationally intensive and scales poorly with the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We show that the proposed identities, which follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state-of-the-art approximations that rely on sparse kernel matrices.
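The flavour of such identities can be sketched for the special case in which the hyperparameters only rescale a fixed kernel matrix and add i.i.d. noise: after one eigendecomposition, every subsequent evaluation of the log marginal likelihood is O(N). This is a simplified illustration in the spirit of the abstract, not the paper's exact set of identities, and all names are ours.

```python
import numpy as np

def precompute(K0, y):
    """One-off O(N^3) step: eigendecompose the base kernel and project the targets."""
    lam, U = np.linalg.eigh(K0)        # K0 = U diag(lam) U^T
    alpha = U.T @ y
    return lam, alpha

def log_marginal_likelihood(lam, alpha, s, sigma2):
    """O(N) evaluation of the GP log marginal likelihood for K = s*K0 + sigma2*I."""
    d = s * lam + sigma2               # eigenvalues of s*K0 + sigma2*I
    quad = np.sum(alpha**2 / d)        # y^T (s*K0 + sigma2*I)^{-1} y
    logdet = np.sum(np.log(d))         # log |s*K0 + sigma2*I|
    n = lam.size
    return -0.5 * (quad + logdet + n * np.log(2 * np.pi))

# Usage: lam, alpha = precompute(K0, y); then each optimizer call is O(N), e.g.
# ll = log_marginal_likelihood(lam, alpha, s=1.0, sigma2=0.1)
```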
Abstract:
Cloud computing technology has rapidly evolved over the last decade, offering an alternative way to store and work with large amounts of data. However, data security remains an important issue, particularly when using a public cloud service provider. The recent area of homomorphic cryptography allows computation on encrypted data, which would allow users to ensure data privacy on the cloud and increase the potential market for cloud computing. A significant amount of research on homomorphic cryptography has appeared in the literature over the last few years; yet the performance of existing implementations of encryption schemes remains unsuitable for real-time applications. One way this limitation is being addressed is through the use of graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) for implementations of homomorphic encryption schemes. This review presents the current state of the art in this promising new area of research and highlights the interesting open problems that remain.
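The homomorphic property itself is easy to demonstrate with a toy, DGHV-style scheme over the integers, in which adding or multiplying ciphertexts adds or multiplies the hidden bits modulo 2. The sketch and its parameter sizes are illustrative only and nowhere near secure.

```python
import random

def keygen(bits=256):
    return random.getrandbits(bits) | 1          # odd secret modulus p

def encrypt(p, m):
    q = random.getrandbits(2 * p.bit_length())   # large random multiple of p
    r = random.getrandbits(16)                   # small noise
    return p * q + 2 * r + (m & 1)               # bit m hidden as p*q + 2*r + m

def decrypt(p, c):
    return (c % p) % 2                           # works while the noise stays << p

# p = keygen(); c1, c2 = encrypt(p, 1), encrypt(p, 0)
# decrypt(p, c1 + c2) == 1 ^ 0  and  decrypt(p, c1 * c2) == 1 & 0
```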
Abstract:
A fully homomorphic encryption (FHE) scheme is envisioned as a key cryptographic tool in building a secure and reliable cloud computing environment, as it allows arbitrary evaluation of a ciphertext without revealing the plaintext. However, existing FHE implementations remain impractical due to very high time and resource costs. To the authors' knowledge, this paper presents the first hardware implementation of a full encryption primitive for FHE over the integers using FPGA technology. A large-integer multiplier architecture utilising Integer-FFT multiplication is proposed, and a large-integer Barrett modular reduction module is designed incorporating the proposed multiplier. The encryption primitive used in the integer-based FHE scheme is designed employing the proposed multiplier and modular reduction modules. The designs are verified on the Xilinx Virtex-7 FPGA platform. Experimental results show that a speed improvement factor of up to 44 is achievable for the hardware implementation of the FHE encryption scheme when compared to its corresponding software implementation. Moreover, performance analysis shows that additional speed improvements of the integer-based FHE encryption primitive may still be possible, for example through further optimisation or by targeting an ASIC platform.
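Barrett reduction, one of the building blocks named above, replaces the division in x mod n by a multiplication with a precomputed constant, which maps well onto a multiplier-centric hardware design. A plain-integer sketch (names ours, software-level only) is shown below; the paper's module pairs the same idea with the Integer-FFT multiplier for very large operands.

```python
def barrett_setup(n):
    # Precompute k (the bit length of n) and mu = floor(4^k / n).
    k = n.bit_length()
    return k, (1 << (2 * k)) // n

def barrett_reduce(x, n, k, mu):
    # Computes x mod n for 0 <= x < n^2 without dividing by n:
    # q approximates floor(x / n), so r = x - q*n needs at most two fix-ups.
    q = (x * mu) >> (2 * k)
    r = x - q * n
    while r >= n:
        r -= n
    return r

# k, mu = barrett_setup(n); r = barrett_reduce(a * b, n, k, mu)
```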
Abstract:
Pseudomonas aeruginosa genotyping relies mainly upon DNA fingerprinting methods, which can be subjective, expensive and time-consuming. The detection of at least three different clonal P. aeruginosa strains in patients attending two cystic fibrosis (CF) centres in a single Australian city prompted the design of a non-gel-based PCR method to enable clinical microbiology laboratories to readily identify these clonal strains. We designed a detection method utilizing heat-denatured P. aeruginosa isolates and a ten single-nucleotide polymorphism (SNP) profile. Strain differences were detected by SYBR Green-based real-time PCR and high-resolution melting curve analysis (HRM10SNP assay). Overall, 106 P. aeruginosa sputum isolates collected from 74 patients with CF, as well as five reference strains, were analysed with the HRM10SNP assay, and the results were compared with those obtained by pulsed-field gel electrophoresis (PFGE). The HRM10SNP assay accurately identified all 45 isolates belonging to one of the three major clonal strains characterized by PFGE in two Brisbane CF centres (Australian epidemic strain-1, Australian epidemic strain-2 and P42) and distinguished them from 61 other P. aeruginosa strains from Australian CF patients and two representative overseas epidemic strain isolates. The HRM10SNP method is simple, relatively inexpensive and can be completed in <3 h. In our setting, it could easily be made available to clinical microbiology laboratories to screen for local P. aeruginosa strains and to guide infection control policies. Further studies are needed to determine whether the HRM10SNP assay can also be modified to detect additional clonal strains that are prevalent in other CF centres.