946 results for random number generator
Abstract:
Random number generation is a central component of modern information technology, with crucial applications in ensuring communications and information security. The development of new physical mechanisms suitable for directly generating random bit sequences is thus a subject of intense current research, with particular interest in all-optical techniques suitable for the generation of data sequences with high bit rate. One promising technique that has received much recent attention is the chaotic semiconductor laser system, which produces high-quality random output as a result of the intrinsic nonlinear dynamics of its architecture [1]. Here we propose a novel complementary all-optical technique that might dramatically increase the generation rate of random bits by simultaneously using multiple spectral channels with uncorrelated signals, somewhat similar to the use of wavelength-division multiplexing in communications. We propose to exploit the intrinsic nonlinear dynamics of extreme spectral broadening and supercontinuum (SC) generation in optical fibre, a process known to be often associated with non-deterministic fluctuations [2]. In this paper, we report proof-of-concept results indicating that the fluctuations in highly nonlinear fibre SC generation can potentially be used for random number generation.
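The abstract does not detail the bit-extraction step; the following minimal Python sketch (channel count, threshold rule, and the simulated noise are illustrative assumptions, not the authors' procedure) shows one common way multi-channel intensity fluctuations could be turned into parallel random bit streams: sample the pulse-to-pulse intensity in each spectral channel and threshold it at the channel median.

import numpy as np

def extract_bits(intensities):
    """intensities: (n_channels, n_pulses) array of per-pulse spectral
    intensities. Each channel is thresholded at its own median,
    yielding one bit per pulse per channel."""
    thresholds = np.median(intensities, axis=1, keepdims=True)
    return (intensities > thresholds).astype(np.uint8)

# Illustrative stand-in for measured SC fluctuations: here we just draw
# noise; in an experiment these would be per-channel, per-pulse
# intensities from the supercontinuum spectrum.
rng = np.random.default_rng(0)
fake_intensities = rng.lognormal(size=(8, 10_000))  # 8 WDM-like channels
bits = extract_bits(fake_intensities)
print(bits.shape, bits.mean())  # mean near 0.5 if unbiased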
Abstract:
The thermal fluctuation approach is widely used to monitor the association kinetics of surface-bound receptor-ligand interactions. Various protocols, such as sliding standard deviation (SD) analysis (SSA) and Page's test analysis (PTA), have been used to estimate two-dimensional (2D) kinetic rates from the time course of displacement of a molecular carrier. In the current work, we compared the estimates from both SSA and a modified PTA using measured data from an optical trap assay and simulated data from a random number generator. Our results indicated that both SSA and PTA were reliable in estimating 2D kinetic rates. Parametric analysis also demonstrated that the estimates were sensitive to parameters such as sampling rate, sliding window size, and threshold. These results further the understanding of how to quantify the biophysics of receptor-ligand interactions.
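As an illustration of the sliding-SD idea (the exact SSA protocol and parameter values are not given in the abstract, so everything below is an assumed minimal version): the standard deviation of the carrier displacement is computed over a sliding window, and a drop below a threshold flags a candidate binding event.

import numpy as np

def sliding_sd(displacement, window):
    """Standard deviation of a displacement trace over a sliding window."""
    x = np.asarray(displacement, dtype=float)
    out = np.empty(len(x) - window + 1)
    for i in range(len(out)):
        out[i] = x[i:i + window].std()
    return out

def candidate_bound_indices(displacement, window, threshold):
    """Indices where thermal fluctuation is suppressed (SD below the
    threshold), a typical signature of a receptor-ligand bond."""
    sd = sliding_sd(displacement, window)
    return np.flatnonzero(sd < threshold)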
Abstract:
Single-electron devices (SEDs) have ultra-low power dissipation and high integration density, which make them promising candidates as basic circuit elements of next-generation VLSI circuits. In this paper, we propose two novel single-electron circuit architectures: the single-electron simulated annealing algorithm (SAA) circuit and the single-electron cellular neural network (CNN). We used the MOSFET-based single-electron turnstile [1] as the basic circuit element. The SAA circuit consists of the voltage-controlled single-electron random number generator [2] and the single-electron multiple-valued memories (SEMVs) [3]. Random-number generation and variable updates in the SAA are easily achieved by transferring electrons using the single-electron turnstile. The CNN circuit used the floating-gate single-electron turnstile as the neural synapses, and the number of electrons is used to represent the cells' states. These novel circuits are promising for future nanoscale integrated circuits.
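For readers unfamiliar with the algorithm the SAA circuit implements, here is a minimal software sketch of simulated annealing (the cost function, move set, and cooling schedule are illustrative assumptions; in the proposed circuit the random numbers and state updates would instead come from electron transfers in the turnstile):

import math, random

def simulated_annealing(cost, state, neighbour,
                        t0=1.0, cooling=0.995, steps=10_000):
    """Generic simulated annealing: accept uphill moves with
    probability exp(-dE/T); the RNG drives both the proposal and the
    acceptance test."""
    best = cur = state
    t = t0
    for _ in range(steps):
        cand = neighbour(cur)
        d = cost(cand) - cost(cur)
        if d <= 0 or random.random() < math.exp(-d / t):
            cur = cand
            if cost(cur) < cost(best):
                best = cur
        t *= cooling
    return best

# Toy usage: minimise a 1-D quadratic over the integers.
print(simulated_annealing(lambda s: (s - 7) ** 2, 0,
                          lambda s: s + random.choice((-1, 1))))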
Abstract:
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparison with the logic circuit method, and we compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from the pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the generator's algorithm. When a PRNG has n bits of state (registers or internal states), the necessary computational cost is smaller than O(2^n). The Berlekamp-Massey algorithm, by contrast, needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting linear complexity, so linear complexity is generally given as an estimate. Because the linearization method calculates from the algorithm of the PRNG itself, it can determine a lower bound on the linear complexity.
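For contrast with the linearization method, the following is a standard textbook Berlekamp-Massey implementation over GF(2) (not taken from this paper); it computes the linear complexity of an observed output sequence, which is exactly the initial-value-dependent estimate the abstract refers to:

def linear_complexity(bits):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR that
    generates the given binary sequence."""
    n = len(bits)
    c = [0] * n; b = [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy between the sequence and the current LFSR.
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]   # c(x) += x^(i-m) * b(x)
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

print(linear_complexity([0, 0, 1, 1, 0, 1, 1, 1]))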
Abstract:
With the rapid growth of the Internet and digital communications, the volume of sensitive electronic transactions being transferred and stored over and on insecure media has increased dramatically in recent years. The growing demand for cryptographic systems to secure this data, across a multitude of platforms ranging from large servers to small mobile devices and smart cards, has necessitated research into low-cost, flexible and secure solutions. As constraints on architectures such as area, speed and power become key factors in choosing a cryptosystem, methods for speeding up the development and evaluation process are necessary. This thesis investigates flexible hardware architectures for the main components of a cryptographic system. Dedicated hardware accelerators can provide significant performance improvements when compared to implementations on general-purpose processors. Each of the proposed designs is analysed in terms of speed, area, power, energy and efficiency. Field Programmable Gate Arrays (FPGAs) are chosen as the development platform due to their fast development time and reconfigurable nature. Firstly, a reconfigurable architecture for performing elliptic curve point scalar multiplication on an FPGA is presented. Elliptic curve cryptography is one such method to secure data, offering similar security levels to traditional systems, such as RSA, but with smaller key sizes, translating into lower memory and bandwidth requirements. The architecture is implemented using different underlying algorithms and coordinates for dedicated Double-and-Add algorithms, twisted Edwards algorithms and SPA-secure algorithms, and its power consumption and energy on an FPGA are measured. Hardware implementation results for these new algorithms are compared against their software counterparts, and the best choices for minimum area-time and area-energy circuits are then identified and examined for larger key and field sizes. Secondly, implementation methods for another component of a cryptographic system, namely hash functions, developed in the recently concluded SHA-3 hash competition, are presented. Various designs from the three rounds of the NIST-run competition are implemented on FPGA, along with an interface to allow fair comparison of the different hash functions when operating in a standardised and constrained environment. Different methods of implementation for the designs and their subsequent performance are examined in terms of throughput, area and energy costs using various constraint metrics. Comparing many different implementation methods and algorithms is nontrivial. Another aim of this thesis is the development of generic interfaces, used both to reduce implementation and test time and to enable fair baseline comparisons of different algorithms when operating in a standardised and constrained environment. Finally, a hardware-software co-design cryptographic architecture is presented. This architecture is capable of supporting multiple types of cryptographic algorithms and is described through an application for performing public key cryptography, namely the Elliptic Curve Digital Signature Algorithm (ECDSA). This architecture makes use of the elliptic curve architecture and the hash functions described previously. These components, along with a random number generator, provide hardware acceleration for a Microblaze-based cryptographic system.
The trade-off between performance and flexibility is discussed using dedicated software and hardware-software co-design implementations of the elliptic curve point scalar multiplication block. Results are then presented in terms of the overall cryptographic system.
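The Double-and-Add method mentioned above is the textbook scalar-multiplication algorithm. A minimal affine-coordinate sketch over a toy curve follows (the parameters p = 97, a = 2, b = 3 and the base point (0, 10) are illustrative assumptions, nowhere near the key sizes used in the thesis):

def inv(x, p):
    return pow(x, p - 2, p)  # modular inverse, p prime

def ec_add(P, Q, a, p):
    """Point addition on y^2 = x^3 + a*x + b over F_p; None is the
    point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mult(k, P, a, p):
    """Left-to-right Double-and-Add."""
    R = None
    for bit in bin(k)[2:]:
        R = ec_add(R, R, a, p)       # double
        if bit == '1':
            R = ec_add(R, P, a, p)   # add
    return R

print(scalar_mult(5, (0, 10), 2, 97))  # toy curve y^2 = x^3 + 2x + 3 mod 97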
Abstract:
In the field of embedded systems design, coprocessors play an important role as components to increase performance. Many embedded systems are built around a small General Purpose Processor (GPP). If the GPP cannot meet the performance requirements for a certain operation, a coprocessor can be included in the design. The GPP can then offload the computationally intensive operation to the coprocessor, thus increasing the performance of the overall system. A common application of coprocessors is the acceleration of cryptographic algorithms. The work presented in this thesis discusses coprocessor architectures for various cryptographic algorithms that are found in many cryptographic protocols. Their performance is then analysed on a Field Programmable Gate Array (FPGA) platform. Firstly, the acceleration of Elliptic Curve Cryptography (ECC) algorithms is investigated through the use of instruction set extensions of a GPP. The performance of these algorithms in a full hardware implementation is then investigated, and an architecture for the acceleration of the ECC-based digital signature algorithm is developed. Hash functions are also an important component of a cryptographic system. The FPGA implementation of recent hash function designs from the SHA-3 competition is discussed and a fair comparison methodology for hash functions is presented. Many cryptographic protocols involve the generation of random data, for keys or nonces. This requires a True Random Number Generator (TRNG) to be present in the system. Various TRNG designs are discussed and a secure implementation, including post-processing and failure detection, is introduced. Finally, a coprocessor for the acceleration of operations at the protocol level is discussed, where a novel aspect of the design is the secure method in which private-key data is handled.
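The abstract does not name the post-processing scheme; as one classic example of TRNG post-processing, a von Neumann corrector removes bias from independent raw bits (purely illustrative here, not necessarily the thesis design):

def von_neumann_debias(raw_bits):
    """Map bit pairs 01 -> 0 and 10 -> 1, discard 00 and 11. Output is
    unbiased if input bits are independent, at the cost of a variable
    (on average at least 4x) rate reduction."""
    out = []
    for b1, b2 in zip(raw_bits[0::2], raw_bits[1::2]):
        if b1 != b2:
            out.append(b1)
    return out

print(von_neumann_debias([0, 1, 1, 1, 1, 0, 0, 0]))  # -> [0, 1]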
Abstract:
A new fast stream cipher, MAJE4, is designed and developed with a variable key size of 128 or 256 bits. The randomness of the cipher's output is analysed using statistical tests. The performance of the stream cipher is evaluated in comparison with another fast stream cipher called JEROBOAM. The focus is to generate a long, unpredictable key stream with better performance, which can be used for cryptographic applications.
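Since the abstract does not describe MAJE4's internals, the following only illustrates the generic stream-cipher pattern it follows: a keyed generator produces a key stream that is XORed with the plaintext. The toy keystream below is a stand-in with none of MAJE4's structure or security guarantees.

import hashlib
from itertools import count

def toy_keystream(key: bytes):
    """Stand-in key stream built from counter-mode hashing. Purely
    illustrative; a real stream cipher uses its own keyed generator."""
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def stream_xor(key: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same XOR operation.
    return bytes(b ^ k for b, k in zip(data, toy_keystream(key)))

ct = stream_xor(b"secret key", b"attack at dawn")
print(stream_xor(b"secret key", ct))  # b'attack at dawn'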
Abstract:
Background: Studies evaluating the acceptability of simplified follow-up after medical abortion have focused on high-resource or urban settings where telephones, road connections, and modes of transport are available and where women have formal education. Objective: To investigate women's acceptability of home-assessment of abortion and whether acceptability of medical abortion differs by in-clinic or home-assessment of abortion outcome in a low-resource setting in India. Design: Secondary outcome of a randomised, controlled, non-inferiority trial. Setting: Outpatient primary health care clinics in rural and urban Rajasthan, India. Population: Women were eligible if they sought abortion with a gestation up to 9 weeks, lived within the defined study area and agreed to follow-up. Women were ineligible if they had known contraindications to medical abortion, haemoglobin < 85 g/l or were below 18 years. Methods: Abortion outcome assessment through routine clinic follow-up by a doctor was compared with home-assessment using a low-sensitivity pregnancy test and a pictorial instruction sheet. A computerized random number generator generated the randomisation sequence (1:1) in blocks of six. Research assistants randomly allocated eligible women who opted for medical abortion (mifepristone and misoprostol) using opaque sealed envelopes. Blinding during outcome assessment was not possible. Main outcome measures: Women's acceptability of home-assessment was measured as future preference of follow-up. Overall satisfaction, expectations, and comparison with previous abortion experiences were compared between study groups. Results: 731 women were randomised to the clinic follow-up group (n = 353) or the home-assessment group (n = 378). 623 (85%) women were successfully followed up; of those, 597 (96%) were satisfied and 592 (95%) found the abortion better than or as expected, with no difference between study groups. The majority, 355 (57%) women, preferred home-assessment in the event of a future abortion. Significantly more women in the home-assessment group, 284 (82%), preferred home-assessment in the future, compared with 188 (70%) of women in the clinic follow-up group who preferred clinic follow-up in the future (p < 0.001). Conclusion: Home-assessment is highly acceptable among women in low-resource and rural settings. The choice to follow up an early medical abortion according to women's preference should be offered to foster women's reproductive autonomy.
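The allocation procedure described (a computer-generated 1:1 sequence in blocks of six) corresponds to standard permuted-block randomisation; a minimal sketch (group labels and block count are illustrative, not the trial's actual code) is:

import random

def permuted_block_sequence(n_blocks, block_size=6, arms=("clinic", "home")):
    """1:1 allocation in permuted blocks: each block contains an equal
    number of each arm, in random order."""
    sequence = []
    for _ in range(n_blocks):
        block = list(arms) * (block_size // len(arms))
        random.shuffle(block)
        sequence.extend(block)
    return sequence

print(permuted_block_sequence(2))  # e.g. ['home', 'clinic', 'clinic', ...]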
Abstract:
The objective is to analyze the relationship between risk and the number of stocks in a portfolio for an individual investor when stocks are chosen by a "naive strategy". For this, we carried out an experiment in which individuals selected stocks so as to reproduce this relationship. 126 participants were informed that the risk of the first choice, a single asset, would be the average of the standard deviations of all single-asset portfolios, and that the same procedure would be used for portfolios composed of two, three, and so on, up to 30 stocks. They selected the assets they wanted in their portfolios without the support of financial analysis. For comparison, we also ran a hypothetical simulation of 126 investors who selected shares from the same universe through a random number generator. Thus, each real participant is matched with a random hypothetical investor facing the same opportunity. Patterns were observed in the portfolios of individual participants, characterizing the curves for the components of the samples. Because these groupings are somewhat arbitrary, a more objective measure of behavior was used: a simple linear regression for each participant, in order to predict the variance of the portfolio as a function of the number of assets. In addition, we conducted a pooled cross-section regression on all observations. The expected pattern occurs on average but not for most individuals, many of whom effectively "de-diversify" when adding seemingly random securities. Furthermore, the results are slightly worse when using a random number generator. This finding challenges the belief that only a small number of securities is necessary for diversification and shows that the result only applies to large samples. The implications are important, since many individual investors hold few stocks in their portfolios.
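The hypothetical random-investor benchmark can be sketched as follows (returns are simulated below; in the study, real return data from the same stock universe would be used, so all numbers are illustrative):

import numpy as np

rng = np.random.default_rng(42)
n_stocks, n_periods = 100, 250
returns = rng.normal(0.0, 0.02, size=(n_periods, n_stocks))  # stand-in returns

def random_portfolio_risk(k, n_draws=126):
    """Average SD of equally weighted portfolios of k randomly chosen
    stocks -- the 'random number generator investor'."""
    sds = []
    for _ in range(n_draws):
        picks = rng.choice(n_stocks, size=k, replace=False)
        sds.append(returns[:, picks].mean(axis=1).std())
    return float(np.mean(sds))

curve = [random_portfolio_risk(k) for k in range(1, 31)]
print(curve[0], curve[-1])  # risk falls as the portfolio grows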
Abstract:
We investigate the nonequilibrium roughening transition of a one-dimensional restricted solid-on-solid model by directly sampling the stationary probability density of a suitable order parameter as the surface adsorption rate varies. The shapes of the probability density histograms suggest a typical Ginzburg-Landau scenario for the phase transition of the model, and estimates of the "magnetic" exponent seem to confirm its mean-field critical behavior. We also found that the flipping times between the metastable phases of the model scale exponentially with the system size, signaling the breaking of ergodicity in the thermodynamic limit. Incidentally, we discovered that a closely related model not considered before also displays a phase transition with the same critical behavior as the original model. Our results support the usefulness of off-critical histogram techniques in the investigation of nonequilibrium phase transitions. We also briefly discuss in the appendix a good and simple pseudo-random number generator used in our simulations.
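The abstract does not say which pseudo-random number generator the appendix describes; as a purely illustrative example of a "good and simple" generator of the kind used in such simulations (an assumption, not the authors' generator), here is xorshift64*:

MASK64 = (1 << 64) - 1

def xorshift64star(seed=88172645463325252):
    """Simple, fast PRNG: 64-bit xorshift state with a multiplicative
    output scramble; yields floats uniform in [0, 1)."""
    x = seed & MASK64
    while True:
        x ^= x >> 12
        x = (x ^ (x << 25)) & MASK64
        x ^= x >> 27
        yield ((x * 0x2545F4914F6CDD1D) & MASK64) / 2**64

rng = xorshift64star()
print([round(next(rng), 3) for _ in range(3)])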