931 results for cryptographic pairing computation, elliptic curve cryptography
Abstract:
This article presents the results of a combined experimental and theoretical study of the fracture and resistance-curve behavior of hybrid natural fiber- and synthetic polymer fiber-reinforced composites being developed for potential applications in affordable housing. Fracture and resistance-curve behavior are studied using single-edge notched bend specimens. The sisal fibers used were examined by atomic force microscopy to characterize their fiber bundle structures. The underlying crack/microstructure interactions and fracture mechanisms are elucidated via in situ optical microscopy and ex situ environmental scanning electron microscopy. The observed crack-bridging mechanisms are modeled using small- and large-scale bridging concepts. The implications of the results are then discussed for the design of eco-friendly building materials reinforced with natural and polypropylene fibers.
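The abstract does not reproduce the bridging model itself; for orientation, a standard small-scale bridging estimate (my notation, not necessarily the paper's) writes the toughening contribution of the bridging fibers as the closure traction integrated over the crack-opening displacement:

    \Delta J_b = \int_0^{u^*} \sigma_b(u)\,\mathrm{d}u

where \sigma_b(u) is the bridging traction at crack opening u and u^* is the opening at which the fibers fail or pull out.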
Abstract:
This work evaluates the efficiency of economical levels of theory for predicting ³J(HH) spin-spin coupling constants, to be used when robust electronic-structure methods are prohibitive. To that end, DFT methods such as mPW1PW91, B3LYP, and PBEPBE were used to obtain coupling constants for a test set whose coupling constants are well known. Satisfactory results were obtained in most cases, with the mPW1PW91/6-31G(d,p)//B3LYP/6-31G(d,p) combination leading the set. In a second step, B3LYP was replaced by the semiempirical methods PM6 and RM1 in the geometry optimizations. Coupling constants calculated from these latter structures were at least as good as those obtained by pure DFT methods. This is a promising result, because some of the main objectives of computational chemistry (low computational cost and time, allied to high performance and precision) were attained together.
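For context (this relation is not stated in the abstract), vicinal proton-proton couplings such as ³J(HH) follow the empirical Karplus dependence on the H-C-C-H dihedral angle \phi,

    {}^{3}J_{\mathrm{HH}}(\phi) \approx A\cos^{2}\phi + B\cos\phi + C,

which is why accuracy depends both on the treatment of the coupling itself and on the quality of the optimized geometry; the "//" notation above separates exactly these two ingredients (coupling method//geometry method).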
Abstract:
In this paper, we investigate the behavior of a family of steady-state solutions of a nonlinear reaction-diffusion equation when some reaction and potential terms are concentrated in an ε-neighborhood of a portion Γ of the boundary. We assume that this ε-neighborhood shrinks to Γ as the small parameter ε goes to zero. We also suppose that the upper boundary of this ε-strip presents highly oscillatory behavior. Our main goal is to show that this family of solutions converges to the solutions of a limit problem, a nonlinear elliptic equation that captures the oscillatory behavior. Indeed, the reaction term and concentrating potential are transformed into a flux condition and a potential on Γ, which depend on the oscillating neighborhood.
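A minimal schematic of this kind of concentration result, in generic notation of my own rather than the paper's: with \omega_\varepsilon the oscillating \varepsilon-strip around \Gamma and \chi_{\omega_\varepsilon} its characteristic function, steady states of

    -\Delta u_\varepsilon + u_\varepsilon = \frac{1}{\varepsilon}\,\chi_{\omega_\varepsilon}\, f(u_\varepsilon) \quad \text{in } \Omega, \qquad \frac{\partial u_\varepsilon}{\partial n} = 0 \quad \text{on } \partial\Omega,

converge, as \varepsilon \to 0, to solutions of a limit problem of the type

    -\Delta u + u = 0 \quad \text{in } \Omega, \qquad \frac{\partial u}{\partial n} = \mu\, f(u) \quad \text{on } \Gamma,

where the factor \mu encodes the geometry (in particular the oscillations) of the shrinking neighborhood; a concentrated potential term is handled analogously and survives as a potential on \Gamma.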
Abstract:
We revisit the issue of the constancy of the dark matter (DM) and baryonic Newtonian acceleration scales within the DM scale radius by considering a large sample of late-type galaxies. We rely on a Markov Chain Monte Carlo method to estimate the parameters of the halo model and the stellar mass-to-light ratio, and then propagate the uncertainties from the rotation-curve data to the estimates of the acceleration scales. This procedure allows us to compile a catalogue of 58 objects with estimated values of the B-band absolute magnitude M_B, the virial mass M_vir, and the DM and baryonic Newtonian accelerations (denoted g_DM(r_0) and g_bar(r_0), respectively) within the scale radius r_0, which we use to investigate whether it is possible to define a universal acceleration scale. We find a weak but statistically meaningful correlation with M_vir, which makes us argue against the universality of the acceleration scales. However, the results depend somewhat on the sample adopted, so a careful analysis of selection effects should be carried out before any definitive conclusion can be drawn.
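As a concrete illustration of the kind of pipeline the abstract describes (a sketch under stated assumptions: the toy rotation-curve model, priors, and data below are mine, not the paper's), one can fit halo parameters and a stellar mass-to-light ratio with a plain Metropolis-Hastings sampler and propagate the posterior samples to the acceleration at the scale radius:

    # Toy MCMC fit of a rotation curve, then propagation of the posterior
    # to the DM acceleration at the scale radius r0. Everything here is
    # illustrative: model form, priors, and synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative data: radii (kpc), observed speeds (km/s), errors (km/s).
    r_obs = np.linspace(1.0, 20.0, 15)
    v_obs = 120.0 * r_obs / np.sqrt(r_obs**2 + 3.0**2) + rng.normal(0, 5, r_obs.size)
    v_err = np.full_like(r_obs, 5.0)

    def v_model(r, theta):
        """Toy curve: pseudo-isothermal halo (v0, r0) plus a fixed-shape
        stellar disc scaled by the mass-to-light ratio ups."""
        v0, r0, ups = theta
        v_halo2 = v0**2 * (1.0 - (r0 / r) * np.arctan(r / r0))
        v_disc2 = ups * 40.0**2 * (r / (r + 2.0))
        return np.sqrt(v_halo2 + v_disc2)

    def log_post(theta):
        v0, r0, ups = theta
        if not (10 < v0 < 500 and 0.1 < r0 < 50 and 0.1 < ups < 10):
            return -np.inf                      # flat priors
        resid = (v_obs - v_model(r_obs, theta)) / v_err
        return -0.5 * np.sum(resid**2)

    # Plain Metropolis-Hastings random walk.
    theta = np.array([150.0, 5.0, 1.0])
    lp = log_post(theta)
    samples = []
    for _ in range(20000):
        prop = theta + rng.normal(0, [2.0, 0.1, 0.02])
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    samples = np.array(samples[5000:])          # discard burn-in

    # Propagate the posterior to g_DM(r0) = v_halo(r0)^2 / r0.
    v0_s, r0_s = samples[:, 0], samples[:, 1]
    g_dm_r0 = v0_s**2 * (1.0 - np.arctan(1.0)) / r0_s   # (km/s)^2 per kpc
    print("g_DM(r0): median %.1f, 16-84%% range %.1f-%.1f"
          % tuple(np.percentile(g_dm_r0, [50, 16, 84])))

The point of the propagation step is that each posterior sample yields its own g_DM(r_0), so the uncertainty on the acceleration scale inherits the full correlation structure of the fitted parameters rather than relying on error propagation formulas.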
Abstract:
The definition of sample size is a major problem in phytosociological studies. The species accumulation curve is used to define sampling sufficiency, but this method has some limitations, such as the absence of a stabilization point that can be objectively determined and the arbitrariness of the order of sampling units in the curve. A solution to this problem is the use of randomization procedures, e.g. permutation, to obtain a mean species accumulation curve and empirical confidence intervals. However, the randomization process emphasizes the asymptotic character of the curve. Moreover, the absence of an inflection point in the curve makes it impossible to objectively define an optimum sample size.
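A minimal sketch of the permutation procedure mentioned above (the presence/absence matrix is simulated; only the randomization logic is the point):

    # Mean species accumulation curve with empirical confidence bands,
    # obtained by permuting the order of the sampling units.
    import numpy as np

    rng = np.random.default_rng(42)
    # Toy presence/absence matrix: 40 sampling units x 60 species.
    X = (rng.random((40, 60)) < rng.random(60) * 0.3).astype(int)

    n_units = X.shape[0]
    n_perm = 999
    curves = np.empty((n_perm, n_units))

    for b in range(n_perm):
        order = rng.permutation(n_units)
        seen = np.cumsum(X[order], axis=0) > 0     # species seen so far
        curves[b] = seen.sum(axis=1)               # richness after k units

    mean_curve = curves.mean(axis=0)
    lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)

    for k in (4, 9, 19, 39):
        print("units=%2d  mean=%5.1f  95%% CI=(%5.1f, %5.1f)"
              % (k + 1, mean_curve[k], lo[k], hi[k]))

Averaging over permutations removes the arbitrariness of sampling-unit order, but, as the abstract notes, the resulting mean curve is still asymptotic and offers no objective stopping point.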
Abstract:
A systematic study is presented for the centrality, transverse momentum (p_T), and pseudorapidity (η) dependence of the inclusive charged hadron elliptic flow (v_2) at midrapidity (|η| < 1.0) in Au + Au collisions at √s_NN = 7.7, 11.5, 19.6, 27, and 39 GeV. The results obtained with different methods, including correlations with the event plane reconstructed in a region separated by a large pseudorapidity gap and four-particle cumulants (v_2{4}), are presented to investigate nonflow correlations and v_2 fluctuations. We observe that the difference between v_2{2} and v_2{4} is smaller at the lower collision energies. Values of v_2, scaled by the initial coordinate-space eccentricity, v_2/ε, as a function of p_T are larger in more central collisions, suggesting stronger collective flow develops in more central collisions, similar to the results at higher collision energies. These results are compared to measurements at higher energies at the Relativistic Heavy Ion Collider (√s_NN = 62.4 and 200 GeV) and at the Large Hadron Collider (Pb + Pb collisions at √s_NN = 2.76 TeV). The v_2(p_T) values for fixed p_T rise with increasing collision energy within the p_T range studied (< 2 GeV/c). A comparison to viscous hydrodynamic simulations is made to potentially help understand the energy dependence of v_2(p_T). We also compare the v_2 results to UrQMD and AMPT transport-model calculations, and physics implications on the dominance of partonic versus hadronic phases in the system created at beam-energy-scan energies are discussed.
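The abstract does not spell out the cumulant estimators; in the standard notation they read

    v_2\{2\} = \sqrt{c_2\{2\}}, \qquad c_2\{2\} = \langle\langle e^{i2(\varphi_1 - \varphi_2)} \rangle\rangle,

    v_2\{4\} = \sqrt[4]{-\,c_2\{4\}}, \qquad c_2\{4\} = \langle\langle e^{i2(\varphi_1 + \varphi_2 - \varphi_3 - \varphi_4)} \rangle\rangle - 2\,\langle\langle e^{i2(\varphi_1 - \varphi_2)} \rangle\rangle^{2},

where the double angle brackets denote averages over particle tuples and over events. Flow fluctuations raise v_2{2} and lower v_2{4}, while nonflow mainly contaminates v_2{2}, which is why the difference between the two methods is used above as a gauge of both effects.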
Abstract:
We analytically study the input-output properties of a neuron whose active dendritic tree, modeled as a Cayley tree of excitable elements, is subjected to Poisson stimulus. Both single-site and two-site mean-field approximations incorrectly predict a nonequilibrium phase transition which is not allowed in the model. We propose an excitable-wave mean-field approximation which shows good agreement with previously published simulation results [Gollo et al., PLoS Comput. Biol. 5, e1000402 (2009)] and accounts for finite-size effects. We also discuss the relevance of our results to experiments in neuroscience, emphasizing the role of active dendrites in the enhancement of dynamic range and in gain control modulation.
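A rough sketch of the class of model being approximated (the update rules and parameters below are illustrative, loosely following the excitable-elements-on-a-Cayley-tree setup of Gollo et al., not the paper's exact specification):

    # Excitable elements (quiescent -> active -> refractory) on a Cayley tree,
    # each driven by an independent Poisson stimulus; activity also spreads to
    # quiescent neighbors with probability p.
    import numpy as np

    rng = np.random.default_rng(1)

    def build_cayley(branching=2, layers=7):
        """Neighbor lists of a Cayley tree rooted at node 0."""
        nbrs = [[]]
        frontier = [0]
        for _ in range(layers):
            nxt = []
            for parent in frontier:
                for _ in range(branching):
                    child = len(nbrs)
                    nbrs.append([parent])
                    nbrs[parent].append(child)
                    nxt.append(child)
            frontier = nxt
        return nbrs

    def mean_activity(rate, nbrs, p=0.7, n_ref=2, steps=1000, dt=1.0):
        """Fraction of time the root is active under Poisson drive of given rate."""
        n = len(nbrs)
        state = np.zeros(n, dtype=int)     # 0 quiescent, 1 active, 2..n_ref+1 refractory
        p_stim = 1.0 - np.exp(-rate * dt)  # prob. of an external spike per step
        active = 0
        for _ in range(steps):
            new = state.copy()
            for i in range(n):
                if state[i] == 0:
                    driven = rng.random() < p_stim
                    spread = any(state[j] == 1 and rng.random() < p for j in nbrs[i])
                    if driven or spread:
                        new[i] = 1
                else:
                    new[i] = (state[i] + 1) % (n_ref + 2)  # active -> refractory -> rest
            state = new
            active += state[0] == 1
        return active / steps

    nbrs = build_cayley()
    for rate in (1e-4, 1e-3, 1e-2, 1e-1):
        print("rate %.0e -> root activity %.3f" % (rate, mean_activity(rate, nbrs)))

Sweeping the stimulus rate over several decades and reading off the root's response is how the dynamic-range enhancement by active dendrites is quantified in simulations of this kind.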
Abstract:
Measurement-based quantum computation is an efficient model for performing universal computation. Nevertheless, theoretical questions have been raised, mainly with respect to realistic noise conditions. To shed some light on this issue, we evaluate the exact dynamics of some single-qubit-gate fidelities in the measurement-based quantum computation scheme when the qubits used as a resource interact with a common dephasing environment. We report a necessary condition for the fidelity dynamics of a general pure N-qubit state interacting with this type of error channel to present oscillatory behavior, and we show that for the initial canonical cluster state the fidelity oscillates as a function of time. This oscillatory behavior of the state fidelity produces significant variations in the computational results of a generic gate acting on that state, depending on the instants at which we choose to apply our set of projective measurements. As we shall see, for some specific gates that are frequently found in the literature, fast application of the set of projective measurements does not necessarily imply high gate fidelity, and likewise slow application does not necessarily imply low gate fidelity. Our condition for the occurrence of the oscillatory fidelity behavior shows that the oscillation presented by the cluster state is due exclusively to its initial geometry. Other states that can be used as resources for measurement-based quantum computation can present the same initial geometric condition. It is therefore important for the present scheme to know when the fidelity of a particular resource state will oscillate in time and, if it does, which times are best for performing the measurements.
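In generic notation (mine, not the paper's): if the N-qubit resource state |\psi\rangle evolves under the dephasing channel into \rho(t), the quantity tracked above is the fidelity

    F(t) = \langle \psi | \rho(t) | \psi \rangle,

and for a common (collective) dephasing environment the off-diagonal terms of \rho(t) acquire time-dependent phases that can recur rather than decay monotonically, so F(t) may oscillate. The projective measurements then effectively sample F(t) at the chosen instants, which is what couples the measurement schedule to the resulting gate fidelity.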
Abstract:
A regional envelope curve (REC) of flood flows summarises the current bound on our experience of extreme floods in a region. RECs are available for most regions of the world. Recent scientific papers introduced a probabilistic interpretation of these curves and formulated an empirical estimator of the recurrence interval T associated with a REC, which, in principle, enables us to use RECs for design purposes in ungauged basins. The main aim of this work is twofold. First, it extends the REC concept to extreme rainstorm events by introducing Depth-Duration Envelope Curves (DDECs), defined as the regional upper bound on all record rainfall depths to date for various rainfall durations. Second, it adapts the probabilistic interpretation proposed for RECs to DDECs and assesses the suitability of these curves for estimating the T-year rainfall event associated with a given duration, for large values of T. Probabilistic DDECs are complementary to regional frequency analysis of rainstorms, and their use in combination with a suitable rainfall-runoff model can provide useful indications of the magnitude of extreme floods for gauged and ungauged basins. The study focuses on two national datasets: the peak-over-threshold (POT) series of rainfall depths with durations of 30 min and 1, 3, 9, and 24 h from 700 Austrian raingauges, and the annual maximum series (AMS) of rainfall depths with durations spanning 5 min to 24 h collected at 220 raingauges in northern-central Italy. Estimating the recurrence interval of a DDEC requires quantifying the equivalent number of independent data, which in turn is a function of the cross-correlation among sequences. While quantifying and modelling intersite dependence is straightforward for AMS series, it may be cumbersome for POT series. This paper proposes a possible approach to address this problem.
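A minimal sketch of the envelope construction itself (the data and the straight-line form in log-log space are illustrative assumptions, not the paper's fitted curves):

    # Empirical Depth-Duration Envelope Curve: for each duration, take the
    # regional maximum record rainfall depth across all gauges, then summarize
    # the envelope as a power law h = a * d^b (a line on log-log axes).
    import numpy as np

    rng = np.random.default_rng(7)

    durations_h = np.array([0.5, 1.0, 3.0, 9.0, 24.0])   # as in the Austrian POT set
    n_gauges = 700

    # Toy record depths (mm): depth grows roughly with a power of duration,
    # with heavy scatter across gauges.
    scale = 25.0 * durations_h ** 0.35
    records = scale * rng.lognormal(mean=0.0, sigma=0.45,
                                    size=(n_gauges, durations_h.size))

    # Empirical envelope: the regional maximum record depth per duration.
    envelope = records.max(axis=0)

    b, log_a = np.polyfit(np.log(durations_h), np.log(envelope), 1)
    print("envelope depths (mm):", np.round(envelope, 1))
    print("fitted h = %.1f * d^%.2f (d in hours)" % (np.exp(log_a), b))

Attaching a recurrence interval T to such an envelope is the harder step the abstract describes: because nearby gauges are cross-correlated, the 700 series carry fewer than 700 gauges' worth of independent information, and the equivalent number of independent data must be estimated first.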
Abstract:
The sustained demand for faster, more powerful chips has been met by the availability of chip manufacturing processes allowing for the integration of increasing numbers of computation units onto a single die. The resulting outcome, especially in the embedded domain, has often been called system-on-chip (SoC) or multi-processor system-on-chip (MPSoC). MPSoC design brings to the foreground a large number of challenges, one of the most prominent of which is the design of the chip interconnection. With the number of on-chip blocks presently ranging in the tens, and quickly approaching the hundreds, the novel issue of how best to provide on-chip communication resources is clearly felt. Networks-on-chip (NoCs) are the most comprehensive and scalable answer to this design concern. By bringing large-scale networking concepts to the on-chip domain, they guarantee a structured answer to present and future communication requirements. The point-to-point connection and packet-switching paradigms they involve also greatly help minimize wiring overhead and physical routing issues. However, as with any technology of recent inception, NoC design is still an evolving discipline. Several main areas of interest require deep investigation for NoCs to become viable solutions:
• The design of the NoC architecture needs to strike the best tradeoff among performance, features, and the tight area and power constraints of the on-chip domain.
• Simulation and verification infrastructure must be put in place to explore, validate, and optimize NoC performance.
• NoCs offer a huge design space, thanks to their extreme customizability in terms of topology and architectural parameters. Design tools are needed to prune this space and pick the best solutions.
• Even more so given their global, distributed nature, it is essential to evaluate the physical implementation of NoCs to assess their suitability for next-generation designs and their area and power costs.
This dissertation focuses on all of the above points, by describing a NoC architectural implementation called ×pipes; a NoC simulation environment within a cycle-accurate MPSoC emulator called MPARM; and a NoC design flow consisting of a front-end tool for optimal NoC instantiation, called SunFloor, and a set of back-end facilities for the study of NoC physical implementations. This dissertation proves the viability of NoCs for current and upcoming designs, by outlining their advantages (along with a few tradeoffs) and by providing a full NoC implementation framework. It also presents some examples of additional extensions of NoCs, allowing e.g. for increased fault tolerance, and outlines where NoCs may find further application scenarios, such as in stacked chips.