931 results for cryptographic pairing computation, elliptic curve cryptography


Relevance:

30.00%

Publisher:

Abstract:


Relevance:

30.00%

Publisher:

Abstract:

This document presents GEmSysC, a unified cryptographic API for embedded systems. Software layers implementing this API can be built on top of existing libraries, allowing embedded software to access cryptographic functions in a consistent way that does not depend on the underlying library. The API complies with good practices for API design and for embedded software development, and takes its inspiration from other cryptographic libraries and standards. The main inspiration for GEmSysC was the CMSIS-RTOS standard, which defines a unified, implementation-independent API for embedded software, but targets operating systems rather than cryptographic functions. GEmSysC consists of a generic core and attachable modules, one for each cryptographic algorithm. This document contains the specification of the GEmSysC core and three of its modules: AES, RSA and SHA-256. Although GEmSysC was built targeting embedded systems, its use is not restricted to them; after all, embedded systems are just very limited computing devices. As a proof of concept, two implementations of GEmSysC were made. One was built over wolfSSL, an open source library for embedded systems; the other was built over OpenSSL, which is open source and a de facto standard but does not specifically target embedded systems. The wolfSSL-based implementation was evaluated on a Cortex-M3 processor with no operating system, while the OpenSSL-based implementation was evaluated on a personal computer running Windows 10. Test results show GEmSysC to be simpler than other libraries in some aspects, and that both implementations incur little computation-time overhead compared to the underlying cryptographic libraries: measured per cryptographic algorithm, the overhead is between about 0% and 0.17% for the implementation over wolfSSL and between 0.03% and 1.40% for the one over OpenSSL. This document also presents the memory costs of each implementation.
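The layered design described above (a generic core with attachable per-algorithm modules, each backend hidden behind one interface) can be sketched as follows. This is a hypothetical illustration, not the actual GEmSysC specification: the class and method names are invented, and Python's hashlib stands in for a backend such as wolfSSL or OpenSSL.

```python
import hashlib

class Sha256Backend:
    """One attachable module; real backends could wrap wolfSSL or OpenSSL."""
    def digest(self, data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

class CryptoCore:
    """Generic core: application code talks only to this interface,
    so swapping the underlying library does not change callers."""
    def __init__(self):
        self._modules = {}

    def attach(self, name: str, module) -> None:
        self._modules[name] = module

    def hash(self, name: str, data: bytes) -> bytes:
        return self._modules[name].digest(data)

core = CryptoCore()
core.attach("sha256", Sha256Backend())
# Standard SHA-256 test vector for "abc":
print(core.hash("sha256", b"abc").hex())
```

Because callers depend only on the core interface, a second implementation of `Sha256Backend` over a different library could be attached without touching application code, which is the portability property the abstract describes.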

Relevance:

30.00%

Publisher:

Abstract:

Secure computation involves multiple parties computing a common function while keeping their inputs private, and is a growing field of cryptography due to its potential for maintaining privacy guarantees in real-world applications. However, current secure computation protocols are not yet efficient enough to be used in practice. We argue that this is because much of the research effort has focused on generality rather than specificity: current research tends to construct and improve protocols for the strongest notions of security or for an arbitrary number of parties, whereas in real-world deployments these security notions are often too strong, or the number of parties running a protocol is smaller. In this thesis we take several steps towards bridging the efficiency gap of secure computation by constructing efficient protocols for specific real-world settings and security models. In particular, we make the following four contributions:
- We present an efficient (when amortized over multiple runs) maliciously secure two-party computation (2PC) protocol in the multiple-execution setting, where the same function is computed multiple times by the same pair of parties.
- We improve the efficiency of 2PC protocols in the publicly verifiable covert security model, where a party can cheat with some probability, but if caught, the honest party obtains a certificate proving that the given party cheated.
- We show how to optimize existing 2PC protocols when the function to be computed includes predicate checks on its inputs.
- We demonstrate an efficient maliciously secure protocol in the three-party setting.
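The opening sentence — parties jointly computing a function while keeping their inputs private — can be made concrete with the simplest textbook primitive, additive secret sharing. This toy sketch is not any protocol from the thesis; the modulus and inputs are illustrative, and it is only passively secure.

```python
import secrets

P = 2**61 - 1  # public prime modulus for the share arithmetic

def share(x: int, n: int) -> list:
    """Split x into n additive shares modulo P; any n-1 shares look
    uniformly random, so they reveal nothing about x."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

inputs = [12, 30, 7]                           # each party's private input
all_shares = [share(x, 3) for x in inputs]     # each party shares its input

# Party j locally adds up the j-th share of every input...
partials = [sum(col) % P for col in zip(*all_shares)]

# ...and only recombining all partial sums reveals the total, never
# an individual input.
print(sum(partials) % P)  # 49
```

Real 2PC protocols such as those in the thesis handle arbitrary functions and active cheating, but the core idea — computing on hidden, randomized representations of the inputs — is the same.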

Relevance:

30.00%

Publisher:

Abstract:

Non-orthogonal multiple access (NOMA) is emerging as a promising multiple access technology for fifth-generation cellular networks to address fast-growing mobile data traffic. It applies superposition coding at the transmitter, allowing the same frequency resource to be allocated simultaneously to multiple intra-cell users, and successive interference cancellation at the receivers to cancel intra-cell interference. User pairing and power allocation (UPPA) is a key design aspect of NOMA. Existing UPPA algorithms are mainly based on exhaustive search, whose high computational complexity can severely affect NOMA performance. A fast UPPA algorithm based on proportional fairness (PF) scheduling is proposed to address the problem. The novel idea is to form user pairs around the users with the highest PF metrics, with pre-configured fixed power allocation. System-level simulation results show that the proposed algorithm is significantly faster than the existing exhaustive search algorithm (seven times faster in a scenario with 20 users) with negligible throughput loss.
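The pairing idea sketched in the abstract — rank users by PF metric and build pairs around the top-ranked ones instead of searching all combinations — might look like the following. This is a heavily hedged illustration: the metric values, the strong-with-weak pairing rule, and the function names are assumptions, not the paper's algorithm.

```python
def pf_pairing(pf_metrics: dict) -> list:
    """Greedy sketch: sort users by PF metric, then pair the strongest
    users with the weakest ones under a fixed (pre-configured) power
    split, avoiding any exhaustive search over pairings."""
    ranked = sorted(pf_metrics, key=pf_metrics.get, reverse=True)
    half = len(ranked) // 2
    strong, weak = ranked[:half], ranked[half:]
    # Pairing strong with weak maximizes the channel-gain gap inside a
    # pair, which is what makes successive interference cancellation
    # effective in NOMA.
    return list(zip(strong, reversed(weak)))

metrics = {"u1": 9.0, "u2": 1.5, "u3": 7.2, "u4": 3.1}
print(pf_pairing(metrics))  # [('u1', 'u2'), ('u3', 'u4')]
```

A greedy pass like this is linearithmic in the number of users, which is where the claimed speedup over exhaustive search would come from.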

Relevance:

20.00%

Publisher:

Abstract:

We evaluated the performance of a novel procedure for segmenting mammograms and detecting clustered microcalcifications in two types of image sets obtained by digitizing mammograms with either a laser scanner or a conventional "optical" scanner. Specific regions of the digital mammograms, with and without clustered microcalcifications, were identified and selected. A remarkable increase in image intensity was noticed in the images from the optical scanner compared with the original mammograms. A procedure based on a polynomial correction was developed to compensate for the differences between the characteristic curves of the scanners and those of the films. The processing scheme was applied to both sets, before and after the polynomial correction. The results clearly indicated the influence of mammogram digitization on the performance of processing schemes intended to detect microcalcifications. Without the polynomial intensity correction, the image processing techniques showed better sensitivity in detecting microcalcifications in the images from the laser scanner. However, when the polynomial correction was applied to the images from the optical scanner, no difference in performance was observed between the two types of images. (C) 2008 SPIE and IS&T [DOI: 10.1117/1.3013544]
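The polynomial intensity correction described above — mapping the optical scanner's gray levels back onto the film's characteristic curve — can be sketched with a small interpolating polynomial. The calibration values below are made up for illustration; the paper's actual correction and calibration data are not reproduced here.

```python
def lagrange_poly(points: list):
    """Return the polynomial p(x) passing exactly through the given
    (measured, reference) calibration points (Lagrange form)."""
    def p(x: float) -> float:
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if i != j:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# (optical-scanner gray level, reference film/laser gray level) —
# synthetic calibration points, NOT the paper's data:
calib = [(0.0, 0.0), (128.0, 90.0), (255.0, 230.0)]
correct = lagrange_poly(calib)

# The correction maps a measured value back to the reference curve:
print(round(correct(128.0), 1))  # 90.0
```

In practice one would fit a least-squares polynomial to many calibration points rather than interpolate three exactly, but the structure of the correction — a per-pixel polynomial remapping of intensities — is the same.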

Relevance:

20.00%

Publisher:

Abstract:

Several numerical methods for boundary value problems use integral and differential operational matrices, expressed in polynomial bases in a Hilbert space of functions. This work presents a sequence of matrix operations allowing the direct computation of operational matrices for polynomial bases, orthogonal or not, starting from any previously known reference matrix. Furthermore, it shows how to obtain the reference matrix for a chosen polynomial basis. The results presented here apply not only to integration and differentiation, but to any linear operation.
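A minimal concrete instance of the objects discussed above: for the monomial basis {1, x, x², x³}, differentiation acts on coefficient vectors as a fixed operational matrix D, which could serve as the kind of known reference matrix from which matrices for other bases are derived by a change of basis. The construction below is standard textbook material, not the paper's specific matrix sequence.

```python
N = 4  # work in the basis {1, x, x^2, x^3}

# D[i][j] = coefficient of x^i in d/dx(x^j), since d/dx(x^j) = j*x^(j-1):
D = [[0.0] * N for _ in range(N)]
for j in range(1, N):
    D[j - 1][j] = float(j)

def apply(M: list, c: list) -> list:
    """Apply an operational matrix to a coefficient vector."""
    return [sum(M[i][j] * c[j] for j in range(N)) for i in range(N)]

# p(x) = 1 + 2x + 3x^2 + 4x^3  =>  p'(x) = 2 + 6x + 12x^2
print(apply(D, [1.0, 2.0, 3.0, 4.0]))  # [2.0, 6.0, 12.0, 0.0]
```

For another polynomial basis with change-of-basis matrix A (columns expressing the new basis in monomials), the corresponding operational matrix is the similarity transform A⁻¹DA, which is the kind of matrix-operation sequence the abstract refers to.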

Relevance:

20.00%

Publisher:

Abstract:

We present STAR results on the elliptic flow v2 of charged hadrons, strange and multistrange particles from √sNN = 200 GeV Au+Au collisions at the BNL Relativistic Heavy Ion Collider (RHIC). A detailed study of the centrality dependence of v2 over a broad transverse momentum range is presented. Comparisons of different analysis methods are made in order to estimate systematic uncertainties. To discuss the nonflow effect, we have performed the first analysis of v2 with the Lee-Yang zero method for K0S and Λ. In the relatively low pT region, pT ≤ 2 GeV/c, a scaling with mT − m is observed for identified hadrons in each centrality bin studied. However, we do not observe v2(pT) scaled by the participant eccentricity to be independent of centrality. At higher pT, 2 ≤ pT ≤ 6 GeV/c, v2 scales with quark number for all hadrons studied. For the multistrange hadron Ω, which does not suffer appreciable hadronic interactions, the values of v2 are consistent with both mT − m scaling at low pT and number-of-quark scaling at intermediate pT. As a function of collision centrality, an increase of pT-integrated v2 scaled by the participant eccentricity has been observed, indicating stronger collective flow in more central Au+Au collisions.

Relevance:

20.00%

Publisher:

Abstract:

Context. Rotation curves of interacting galaxies often show velocities that are either rising or falling in the direction of the companion galaxy. Aims. We seek to reproduce and analyse these features in the rotation curves of simulated equal-mass galaxies undergoing a one-to-one encounter, as possible indicators of close encounters. Methods. Using 3D simulations of major mergers, we study the time evolution of these asymmetries in a pair of galaxies during the first passage. Results. Our main results are: (a) the rotation curve asymmetries appear right at the pericentre of the first passage; (b) the significantly disturbed rotation velocities occur within a small time interval, of ~0.5 Gyr h⁻¹, and therefore the presence of a bifurcation in the velocity curve could be used as an indicator of the pericentre occurrence. These results are in qualitative agreement with previous findings for minor mergers and flybys.

Relevance:

20.00%

Publisher:

Abstract:

Differential measurements of the elliptic (v2) and hexadecapole (v4) Fourier flow coefficients are reported for charged hadrons as a function of transverse momentum (pT) and collision centrality or number of participant nucleons (Npart) for Au + Au collisions at √sNN = 200 GeV. The v2,4 measurements at pseudorapidity |η| ≤ 0.35, obtained with four separate reaction-plane detectors positioned in the range 1.0 < |η| < 3.9, show good agreement, indicating the absence of significant Δη-dependent nonflow correlations. Sizable values for v4(pT) are observed, with a ratio v4(pT, Npart)/v2²(pT, Npart) ≈ 0.8 for 50 ≲ Npart ≲ 200, which is compatible with the combined effects of a finite viscosity and initial eccentricity fluctuations. For Npart ≳ 200 this ratio increases up to 1.7 in the most central collisions.

Relevance:

20.00%

Publisher:

Abstract:

We present inclusive charged hadron elliptic flow (v2) measured over the pseudorapidity range |η| < 0.35 in Au+Au collisions at √sNN = 200 GeV. Results for v2 are presented over a broad range of transverse momentum (pT = 0.2-8.0 GeV/c) and centrality (0-60%). To study nonflow effects, i.e. correlations other than collective flow, as well as the fluctuations of v2, we compare two different analysis methods: (1) the event-plane method from two independent subdetectors at forward (|η| = 3.1-3.9) and beam (|η| > 6.5) pseudorapidities and (2) the two-particle cumulant method extracted using correlations between particles detected at midrapidity. The two event-plane results are consistent within systematic uncertainties over the measured pT range and in centrality 0-40%. There is at most a 20% difference in v2 between the two event-plane methods in peripheral (40-60%) collisions. The comparisons between the two-particle cumulant results and the standard event-plane measurements are discussed.
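The quantity measured in the flow abstracts above is the second Fourier coefficient of the azimuthal particle distribution, dN/dφ ∝ 1 + 2 v2 cos 2(φ − Ψ), estimated as the weighted average ⟨cos 2(φ − Ψ)⟩. The toy sketch below illustrates only this definition, with a fine grid standing in for measured particles; the input v2 and reaction-plane angle Ψ are arbitrary.

```python
import math

def estimate_v2(v2_true: float, psi: float = 0.3, n: int = 100000) -> float:
    """Average cos 2(phi - psi) weighted by the azimuthal distribution
    dN/dphi = 1 + 2*v2*cos 2(phi - psi); this recovers v2."""
    num = den = 0.0
    for k in range(n):
        phi = 2.0 * math.pi * k / n
        w = 1.0 + 2.0 * v2_true * math.cos(2.0 * (phi - psi))  # dN/dphi
        num += w * math.cos(2.0 * (phi - psi))
        den += w
    return num / den

print(round(estimate_v2(0.08), 4))  # 0.08
```

Real analyses (event-plane, cumulant, Lee-Yang zero) differ precisely in how they estimate Ψ and suppress nonflow correlations from finite samples, which is what the method comparisons in these abstracts are about.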

Relevance:

20.00%

Publisher:

Abstract:

We show the effects of the granular structure of the initial conditions of a hydrodynamic description of high-energy nucleus-nucleus collisions on some observables, especially on the elliptic-flow parameter v2. Such a structure enhances the production of isotropically distributed high-pT particles, making v2 smaller there. It also reduces v2 in the forward and backward regions, where the global matter density is smaller and such effects therefore become more efficacious.

Relevance:

20.00%

Publisher:

Abstract:

We present the results of an elliptic flow, v2, analysis of Cu + Cu collisions recorded with the solenoidal tracker detector (STAR) at the BNL Relativistic Heavy Ion Collider at √sNN = 62.4 and 200 GeV. Elliptic flow as a function of transverse momentum, v2(pT), is reported for different collision centralities for charged hadrons h± and strangeness-containing hadrons K0S, Λ, Ξ, and φ in the midrapidity region |η| < 1.0. A significant reduction in the systematic uncertainty of the measurement due to nonflow effects has been achieved by correlating particles at midrapidity, |η| < 1.0, with those at forward rapidity, 2.5 < |η| < 4.0. We also present azimuthal correlations in p + p collisions at √s = 200 GeV to help estimate nonflow effects. To study the system-size dependence of elliptic flow, we present a detailed comparison with previously published results from Au + Au collisions at √sNN = 200 GeV. We observe that v2(pT) of strange hadrons has scaling properties similar to those first observed in Au + Au collisions, that is, (i) at low transverse momenta, pT < 2 GeV/c, v2 scales with transverse kinetic energy, mT − m, and (ii) at intermediate pT, 2 < pT < 4 GeV/c, it scales with the number of constituent quarks, nq. We have found that ideal hydrodynamic calculations fail to reproduce the centrality dependence of v2(pT) for K0S and Λ. Eccentricity-scaled v2 values, v2/ε, are larger in more central collisions, suggesting that stronger collective flow develops in more central collisions. The comparison with Au + Au collisions, which reach higher densities, shows that v2/ε depends on the system size, that is, on the number of participants Npart. This indicates that the ideal hydrodynamic limit is not reached in Cu + Cu collisions, presumably because the assumption of thermalization is not attained.

Relevance:

20.00%

Publisher:

Abstract:

The evaluation of the electrical characteristics of technical HTS tapes is of key importance in determining the design and operational features of superconducting power apparatuses, as well as in understanding the external factors that affect superconducting performance. In this work we report systematic measurements of the electric field versus current density (E-J) relation of short samples of three commercial HTS tapes (BSCCO-2223 tapes, with and without steel reinforcement, and a YBCO-coated conductor) at 77 K. In order to obtain sensitive and noiseless voltage signals, the measurements were carried out with DC transport current while subjecting the broad surface of the tape to DC (0-300 mT) and AC (0-62 mT, 60 Hz) magnetic fields. The voltage is measured by a sensitive nanovoltmeter and the applied magnetic field is monitored by a Hall sensor placed on the broad surface of the tape. The results from the three tapes were compared by fitting a power-law equation for currents in the vicinity of the critical current. For the current regime below the critical one, a linear correlation of the electric field with the current density is observed. The BSCCO samples presented the same behavior, i.e., a decrease of the n-index with increasing DC and AC magnetic field strength. Under AC field the decrease of the n-index is steeper compared to DC field. The n-index curve for the YBCO tape showed similar behavior under AC field; however, under DC field in the 0-390 mT range it exhibited only a slight decrease of the n-index.
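The power-law fit mentioned above is conventionally written E = Ec·(J/Jc)^n, so the n-index is the slope of log E versus log J near the critical current. A minimal sketch of that extraction, on synthetic data (the values of Ec, Jc and n below are made up, not the paper's measurements):

```python
import math

def n_index(j1: float, e1: float, j2: float, e2: float) -> float:
    """n from two (J, E) points on the power-law branch of the E-J curve:
    the log-log slope of E = Ec*(J/Jc)**n."""
    return (math.log(e2) - math.log(e1)) / (math.log(j2) - math.log(j1))

# Synthetic E-J data generated with a known n = 20:
Ec, Jc, n_true = 1e-4, 100.0, 20.0
J = [95.0, 105.0]
E = [Ec * (j / Jc) ** n_true for j in J]

print(round(n_index(J[0], E[0], J[1], E[1]), 3))  # 20.0
```

With measured data one would least-squares fit the log-log points instead of using two of them, but the extracted slope is the same n-index whose field dependence the abstract reports.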

Relevance:

20.00%

Publisher:

Abstract:

The one-way quantum computing model introduced by Raussendorf and Briegel [Phys. Rev. Lett. 86, 5188 (2001)] shows that it is possible to quantum compute using only a fixed entangled resource known as a cluster state, together with adaptive single-qubit measurements. This model is the basis for several practical proposals for quantum computation, including a promising proposal for optical quantum computation based on cluster states [M. A. Nielsen, Phys. Rev. Lett. (to be published), quant-ph/0402005]. A significant open question is whether such proposals are scalable in the presence of physically realistic noise. In this paper we prove two threshold theorems showing that scalable fault-tolerant quantum computation may be achieved in implementations based on cluster states, provided the noise in the implementations is below some constant threshold value. Our first threshold theorem applies to a class of implementations in which entangling gates are applied deterministically, but with a small amount of noise. We expect this threshold to be applicable in a wide variety of physical systems. Our second threshold theorem is specifically adapted to proposals such as the optical cluster-state proposal, in which nondeterministic entangling gates are used. A critical technical component of our proofs is a pair of powerful theorems relating the properties of noisy unitary operations restricted to act on a subspace of state space to extensions of those operations acting on the entire state space. We expect these theorems to have a variety of applications in other areas of quantum-information science.

Relevance:

20.00%

Publisher:

Abstract:

Quantum computers promise to greatly increase the efficiency of solving problems such as factoring large integers, combinatorial optimization and quantum physics simulation. One of the greatest challenges now is to implement the basic quantum-computational elements in a physical system and to demonstrate that they can be reliably and scalably controlled. One of the earliest proposals for quantum computation is based on implementing a quantum bit with two optical modes containing one photon. The proposal is appealing because of the ease with which photon interference can be observed. Until now, it suffered from the requirement for non-linear couplings between optical modes containing few photons. Here we show that efficient quantum computation is possible using only beam splitters, phase shifters, single-photon sources and photo-detectors. Our methods exploit feedback from photo-detectors and are robust against errors from photon loss and detector inefficiency. The basic elements are accessible to experimental investigation with current technology.
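The basic elements named above can be illustrated numerically: in the dual-rail encoding, a single photon shared between two optical modes carries one qubit, and beam splitters and phase shifters act as 2×2 unitaries on the two mode amplitudes. This is a toy sketch of that linear algebra only, not of the paper's measurement-and-feedback scheme; the parameter choices are illustrative.

```python
import cmath
import math

def beam_splitter(theta: float) -> list:
    """Real 2x2 rotation mixing the two optical modes."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def phase_shifter(phi: float) -> list:
    """Relative phase between the two modes."""
    return [[1.0, 0.0], [0.0, cmath.exp(1j * phi)]]

def apply(U: list, state: list) -> list:
    """Apply a 2x2 unitary to the (mode-0, mode-1) amplitudes."""
    return [U[0][0] * state[0] + U[0][1] * state[1],
            U[1][0] * state[0] + U[1][1] * state[1]]

# A 50:50 beam splitter sends a photon in mode 0 into an equal
# superposition of the two modes:
state = apply(beam_splitter(math.pi / 4), [1.0, 0.0])
print([round(abs(a) ** 2, 3) for a in state])  # [0.5, 0.5]
```

Single-qubit rotations in this encoding are exactly such compositions of beam splitters and phase shifters; the paper's contribution is showing that photodetection with feedback supplies the missing entangling operations without optical nonlinearities.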