356 results for Random Number Generation

at Queensland University of Technology - ePrints Archive


Relevance:

100.00%

Publisher:

Abstract:

Modular arithmetic has often been regarded as something of a mathematical curiosity, at least by those unfamiliar with its importance to both abstract algebra and number theory, and with its numerous applications. However, with the ubiquity of fast digital computers, and the need for reliable digital security systems such as RSA, this important branch of mathematics is now considered essential knowledge for many professionals. Indeed, computer arithmetic itself is, ipso facto, modular. This chapter describes how the modern graphical spreadsheet may be used to clearly illustrate the basics of modular arithmetic, and to solve certain classes of problems. Students may then gain structural insight, and foundations are laid for applications to such areas as hashing, random number generation, and public-key cryptography.
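The applications named above can be sketched in a few lines of code. This is a minimal illustration, not the chapter's spreadsheet material: the linear congruential generator constants and the RSA primes below are standard textbook values, chosen only to show that both techniques reduce to modular arithmetic.

```python
# Illustrative sketch only: constants and message are textbook
# examples, not taken from the chapter.

def lcg(seed, a=1103515245, c=12345, m=2**31):
    """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
    while True:
        seed = (a * seed + c) % m
        yield seed

gen = lcg(1)
first = next(gen)                # first pseudo-random value

# Toy RSA round trip with textbook primes p = 61, q = 53.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17                           # public exponent, coprime with phi
d = pow(e, -1, phi)              # modular inverse (Python 3.8+)
msg = 42
cipher = pow(msg, e, n)          # encryption: msg^e mod n
assert pow(cipher, d, n) == msg  # decryption recovers the message
```

Three-argument `pow` performs fast modular exponentiation, which is exactly the operation a spreadsheet implementation of these examples has to emulate.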

Relevance:

80.00%

Publisher:

Abstract:

Popular wireless network standards, such as IEEE 802.11/15/16, are increasingly adopted in real-time control systems. However, they are not designed for real-time applications. Therefore, the performance of such wireless networks needs to be carefully evaluated before the systems are implemented and deployed. While efforts have been made to model general wireless networks with completely random traffic generation, there is a lack of theoretical investigations into the modelling of wireless networks with periodic real-time traffic. Considering the widely used IEEE 802.11 standard, with the focus on its distributed coordination function (DCF), for soft-real-time control applications, this paper develops an analytical Markov model to quantitatively evaluate the network quality-of-service (QoS) performance in periodic real-time traffic environments. Performance indices to be evaluated include throughput capacity, transmission delay and packet loss ratio, which are crucial for real-time QoS guarantee in real-time control applications. They are derived under the critical real-time traffic condition, which is formally defined in this paper to characterize the marginal satisfaction of real-time performance constraints.
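For contrast with the periodic-traffic Markov model the paper develops, the classic saturation analysis of the DCF (Bianchi's model) reduces to a two-equation fixed point linking the per-slot transmission probability τ and the conditional collision probability p. The sketch below implements that classical baseline under assumed parameters (W, m, n are illustrative); it is not the paper's critical real-time traffic condition.

```python
# Bianchi's saturation fixed point for 802.11 DCF -- a classical
# baseline, not the periodic real-time traffic model of the paper.
# W: minimum contention window, m: maximum backoff stage, n: stations.

def bianchi_fixed_point(n, W=32, m=5, iters=500):
    tau, p = 0.1, 0.0
    for _ in range(iters):
        p = 1 - (1 - tau) ** (n - 1)          # collision probability
        new_tau = (2 * (1 - 2 * p)) / (
            (1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))
        tau = 0.5 * tau + 0.5 * new_tau       # damped update for stability
    return tau, p

tau, p = bianchi_fixed_point(n=10)
p_tr = 1 - (1 - tau) ** 10                    # some station transmits
p_succ = 10 * tau * (1 - tau) ** 9 / p_tr     # that transmission succeeds
```

From τ and p, throughput, delay and loss follow as in the saturation literature; the paper's contribution is deriving the analogous indices under periodic, deadline-constrained traffic instead.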

Relevance:

80.00%

Publisher:

Abstract:

Choi et al. recently proposed OHLCAP (One-Way Hash based Low-Cost Authentication Protocol), an efficient RFID authentication protocol for a ubiquitous computing environment. However, this paper reveals that the protocol has several security weaknesses: (1) traceability based on the leakage of counter information, (2) vulnerability to an impersonation attack by maliciously updating a random number, and (3) traceability based on a physically attacked tag. Finally, a security-enhanced group-based authentication protocol is presented.

Relevance:

80.00%

Publisher:

Abstract:

Skin temperature is an important physiological measure that can reflect the presence of illness and injury, as well as provide insight into the localised interactions between the body and the environment. The aim of this systematic review was to analyse the agreement between conductive and infrared means of assessing skin temperature, which are commonly employed in clinical, occupational, sports medicine, public health and research settings. A computerised search of four electronic databases, using a combination of 21 keywords, and citation tracking was performed in January 2015, returning a total of 8,602 records. Full-text eligibility was determined independently by two reviewers. Studies meeting the following criteria were included in the review: 1) the literature was written in English, 2) participants were human (in vivo), 3) skin surface temperature was assessed at the same site, 4) at least two commercially available devices were employed—one conductive and one infrared—and 5) skin temperature data were reported in the study. Methodological quality was assessed independently by two authors using the Cochrane risk of bias tool. A total of 16 articles (n = 245) met the inclusion criteria. Devices were classified as being in agreement if they met the clinically meaningful recommendations of mean differences within ±0.5 °C and limits of agreement within ±1.0 °C. Twelve of the included studies found mean differences greater than ±0.5 °C between conductive and infrared devices. In the presence of an external stimulus (e.g. exercise and/or heat), five studies found exacerbated measurement differences between conductive and infrared devices. This is the first review that has attempted to investigate the presence of any systematic bias between infrared and conductive measures by collectively evaluating the current evidence base.
There was also a consistently high risk of bias across the studies, in terms of sample size, random sequence generation, allocation concealment, blinding and incomplete outcome data. This systematic review questions the suitability of using infrared cameras in stable, resting, laboratory conditions. Furthermore, both infrared cameras and thermometers demonstrate poor agreement with conductive devices in the presence of sweat and environmental heat. These findings have implications for clinical, occupational, public health, sports science and research fields.
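The agreement criteria used in the review can be sketched as a Bland–Altman-style computation. Only the ±0.5 °C mean-difference and ±1.0 °C limits-of-agreement thresholds come from the review; the paired device readings below are invented for illustration.

```python
# Bland-Altman-style agreement check between a conductive and an
# infrared device. Thresholds follow the review's criteria; the
# paired readings (degrees C) are invented.
import statistics

def agreement(conductive, infrared):
    diffs = [c - i for c, i in zip(conductive, infrared)]
    bias = statistics.mean(diffs)                 # mean difference
    sd = statistics.stdev(diffs)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)    # 95% limits of agreement
    ok = abs(bias) <= 0.5 and all(abs(x) <= 1.0 for x in loa)
    return bias, loa, ok

cond = [33.1, 33.4, 32.9, 33.8, 33.2]
infr = [33.0, 33.5, 32.7, 33.6, 33.3]
bias, loa, ok = agreement(cond, infr)
```

A device pair failing either condition would be reported as out of agreement, which is the judgement applied to twelve of the sixteen included studies.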

Relevance:

50.00%

Publisher:

Abstract:

In Crypto’95, Micali and Sidney proposed a method for shared generation of a pseudo-random function f(·) among n players in such a way that for all inputs x, any u players can compute f(x) while t or fewer players fail to do so, where 0 ≤ t < u ≤ n. The idea behind the Micali–Sidney scheme is to generate and distribute secret seeds S = s1, . . . , sd of a poly-random collection of functions among the n players, with each player receiving a subset of S, in such a way that any u players together hold all the secret seeds in S while any t or fewer players lack at least one element of S. The pseudo-random function is then computed as f(x) = fs1(x) ⊕ · · · ⊕ fsd(x), where the fsi(·)’s are poly-random functions. One question raised by Micali and Sidney is how to distribute the secret seeds satisfying the above condition such that the number of seeds, d, is as small as possible. In this paper, we continue the work of Micali and Sidney. We first provide a general framework for shared generation of pseudo-random functions using cumulative maps. We demonstrate that the Micali–Sidney scheme is a special case of this general construction. We then derive an upper and a lower bound for d. Finally, we give a simple, yet efficient, approximation greedy algorithm for generating the secret seeds S in which d is close to the optimum by a factor of at most u ln 2.
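The seed-distribution idea can be made concrete in a small sketch. HMAC-SHA256 stands in for the poly-random functions, and the seed assignment is hand-made for (n, t, u) = (3, 1, 2); none of these concrete choices come from the paper.

```python
# Hedged sketch of the Micali-Sidney shared-PRF idea: f(x) is the XOR
# of per-seed PRFs, and seeds are spread so that any u = 2 players
# jointly hold all of them while any t = 1 player misses one.
# HMAC-SHA256 is a stand-in for the poly-random functions.
import hmac, hashlib

seeds = [b"s1", b"s2", b"s3"]          # d = 3 secret seeds (illustrative)
# Player i holds every seed except seeds[i].
holdings = {i: [s for j, s in enumerate(seeds) if j != i] for i in range(3)}

def prf(seed, x):
    return hmac.new(seed, x, hashlib.sha256).digest()

def f(x):
    """f(x) = f_{s1}(x) XOR f_{s2}(x) XOR f_{s3}(x)."""
    out = bytes(32)
    for s in seeds:
        out = bytes(a ^ b for a, b in zip(out, prf(s, x)))
    return out

# Any two players together hold all seeds; one player alone does not.
assert set(holdings[0]) | set(holdings[1]) == set(seeds)
assert len(set(holdings[0])) < len(seeds)
```

The open question the paper pursues is how to build such assignments (via cumulative maps) with as few seeds d as possible for general (n, t, u).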


Relevance:

40.00%

Publisher:

Abstract:

Next Generation Sequencing (NGS) has revolutionised molecular biology, resulting in an explosion of data sets and an increasing role in clinical practice. Such applications necessarily require rapid identification of the organism as a prelude to annotation and further analysis. NGS data consist of a substantial number of short sequence reads, given context through downstream assembly and annotation, a process requiring reads consistent with the assumed species or species group. Highly accurate results have been obtained for restricted sets using SVM classifiers, but such methods are difficult to parallelise and success depends on careful attention to feature selection. This work examines the problem at very large scale, using a mix of synthetic and real data with a view to determining the overall structure of the problem and the effectiveness of parallel ensembles of simpler classifiers (principally random forests) in addressing the challenges of large scale genomics.
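The sequence-classification setting above can be illustrated with a tiny, stdlib-only sketch. A nearest-centroid rule over 3-mer frequency profiles stands in for the paper's parallel random-forest ensembles, just to show the feature representation; the reads and species labels are invented.

```python
# K-mer profile features for short reads, with a nearest-centroid
# classifier as a simple stand-in for the paper's random forests.
from collections import Counter
from itertools import product

KMERS = ["".join(p) for p in product("ACGT", repeat=3)]

def profile(read, k=3):
    """Normalised k-mer frequency vector of a read."""
    counts = Counter(read[i:i + k] for i in range(len(read) - k + 1))
    total = sum(counts.values()) or 1
    return [counts[km] / total for km in KMERS]

def classify(read, centroids):
    """Assign the read to the species with the closest k-mer centroid."""
    v = profile(read)
    def dist(sp):
        return sum((a - b) ** 2 for a, b in zip(v, centroids[sp]))
    return min(centroids, key=dist)

centroids = {
    "speciesA": profile("ATATATATATATATAT"),   # AT-rich reference
    "speciesB": profile("GCGCGCGCGCGCGCGC"),   # GC-rich reference
}
assert classify("ATATATAGAT", centroids) == "speciesA"
```

In the large-scale setting the paper studies, many such simple classifiers trained on subsets of reads and features can be evaluated in parallel and combined by voting.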


Relevance:

40.00%

Publisher:

Abstract:

This study analyses and compares the cost efficiency of Japanese steam power generation companies using the fixed and random Bayesian frontier models. We show that it is essential to account for heterogeneity in modelling the performance of energy companies. Results from the model estimation also indicate that restricting CO2 emissions can lead to a decrease in total cost. The study finally discusses the efficiency variations between the energy companies under analysis, and elaborates on the managerial and policy implications of the results.

Relevance:

40.00%

Publisher:

Abstract:

Cerebral responses to alternating periods of a control task and a selective letter generation paradigm were investigated with functional magnetic resonance imaging (fMRI). Subjects selectively generated letters from four designated sets of six letters of the English alphabet, with the instruction that they were not to produce letters in alphabetical order (either forwards or backwards), nor to repeat or alternate letters. Performance during this condition was compared with that of a control condition in which subjects recited the same letters in alphabetical order. Analyses revealed significant and extensive foci of activation in a number of cerebral regions, including the mid-dorsolateral frontal cortex, inferior frontal gyrus, precuneus, supramarginal gyrus, and cerebellum, during the selective letter generation condition. These findings are discussed with respect to recent positron emission tomography (PET) and fMRI studies of verbal working memory and encoding/retrieval in episodic memory.
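The generation rule of the paradigm can be made concrete in a small sketch. The six-letter set is invented, and reading "alphabetical order" as a one-step move through the alphabet is an interpretive assumption, not a detail from the study.

```python
# Hedged sketch of the selective letter generation rule: never repeat
# the previous letter, never step one position through the alphabet in
# either direction, and never alternate (A-B-A patterns).
import random

def valid_next(prev2, prev1, cand):
    if cand == prev1:                                  # no repeats
        return False
    if prev1 and abs(ord(cand) - ord(prev1)) == 1:     # no alphabetical step
        return False
    if cand == prev2:                                  # no alternation
        return False
    return True

def generate(letters, length, rng=None):
    rng = rng or random.Random()
    out = []
    while len(out) < length:
        prev1 = out[-1] if out else None
        prev2 = out[-2] if len(out) >= 2 else None
        choices = [c for c in letters if valid_next(prev2, prev1, c)]
        out.append(rng.choice(choices))
    return out

seq = generate(list("ABDFGJ"), 20, random.Random(0))
```

With six letters at most four candidates are ever excluded, so a valid next letter always exists; this is essentially the constraint-satisfaction load the task places on subjects.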

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to characterise and quantify the fungal fragment propagules derived and released from several fungal species (Penicillium, Aspergillus niger and Cladosporium cladosporioides) using different generation methods and different air velocities over the colonies. Real-time fungal spore fragmentation was investigated using an Ultraviolet Aerodynamic Particle Sizer (UVAPS) and a Scanning Mobility Particle Sizer (SMPS). The study showed that there were significant differences (p < 0.01) in the fragmentation percentage between different air velocities for the three generation methods, namely the direct, the fan and the fungal spore source strength tester (FSSST) methods. The percentage of fragmentation also proved to be dependent on the fungal species. The study found that there was no fragmentation for any of the fungal species at an air velocity ≤ 0.4 m/s for any method of generation. Fluorescent signals, as well as mathematical determination, also showed that the fungal fragments were derived from spores. Correlation analysis showed that the number of released fragments measured by the UVAPS under controlled conditions can be predicted on the basis of the number of spores for Penicillium and Aspergillus niger, but not for Cladosporium cladosporioides. The fluorescence percentage of fragment samples was found to be significantly different from that of non-fragment samples (p < 0.0001), and the fragment sample fluorescence was always less than that of the non-fragment samples. The size distribution and concentration of fungal fragment particles were investigated qualitatively and quantitatively by both the UVAPS and the SMPS, and it was found that the UVAPS was more sensitive than the SMPS for measuring small sample concentrations, and that the results obtained from the two instruments were not identical for the same samples.

Relevance:

30.00%

Publisher:

Abstract:

Channel measurements and simulations have been carried out to observe the effects of pedestrian movement on multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) channel capacity. An in-house built MIMO-OFDM packet transmission demonstrator equipped with four transmitters and four receivers has been utilized to perform channel measurements at 5.2 GHz. Variations in the channel capacity dynamic range have been analysed for 1 to 10 pedestrians and different antenna arrays (2 × 2, 3 × 3 and 4 × 4). Results show a predicted 5.5 bits/s/Hz and a measured 1.5 bits/s/Hz increment in the capacity dynamic range with the number of pedestrians and the number of antennas in the transmitter and receiver arrays.
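The capacity figures quoted above follow from the standard MIMO capacity expression C = log2 det(I + (ρ/Nt) H Hᴴ) in bits/s/Hz. The sketch below evaluates it for a 2 × 2 case with the determinant written out by hand; the channel matrix and linear SNR are invented, whereas the paper's results come from measured 5.2 GHz channels.

```python
# Standard MIMO capacity for a 2x2 channel, determinant expanded by
# hand so only the stdlib is needed. H and the SNR are invented.
import math

def capacity_2x2(H, snr_linear):
    """C = log2 det(I + (snr/Nt) * H * H^H) for Nt = 2, in bits/s/Hz."""
    (a, b), (c, d) = H
    g11 = abs(a) ** 2 + abs(b) ** 2            # entries of H H^H (Hermitian)
    g22 = abs(c) ** 2 + abs(d) ** 2
    g12 = a * c.conjugate() + b * d.conjugate()
    s = snr_linear / 2                         # power split across Nt = 2
    det = (1 + s * g11) * (1 + s * g22) - (s ** 2) * (abs(g12) ** 2)
    return math.log2(det)

H = [[1 + 0j, 0.5j], [-0.5j, 1 + 0j]]
cap = capacity_2x2(H, snr_linear=10.0)
```

Pedestrian movement changes H over time, and the spread of C across those realisations is the "capacity dynamic range" analysed in the paper.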

Relevance:

30.00%

Publisher:

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying the one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, which are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated from disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies. In power system monitoring there exist two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13].
One of the key limitations of all existing parameter estimation methods is the fact that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping. One simply cannot afford to wait long enough to collect the large amounts of data required for existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. 
Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at the frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM, discussed next, can monitor frequency changes and so can provide some discrimination in this regard.
The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
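The Energy Based Detector described above can be caricatured in a few lines: model the windowed disturbance energy of normal operation statistically, set a threshold from those statistics, and alarm when a window's energy exceeds it. The signal shapes, window length, damping values and 3-sigma threshold below are all illustrative assumptions, not the thesis's model.

```python
# Sketch of the Energy Based Detector idea: baseline windows contain a
# well-damped ring-down plus sensor noise; a poorly damped mode decays
# slowly and so carries far more energy. All constants are illustrative.
import math, random, statistics

rng = random.Random(42)

def damped_window(damping, n=40):
    """One window of a decaying sinusoid plus small measurement noise."""
    return [math.exp(-damping * t) * math.sin(0.6 * math.pi * t)
            + 0.05 * rng.gauss(0, 1) for t in range(n)]

def energy(x):
    return sum(v * v for v in x)

# Baseline statistics from well-damped windows (normal operation).
baseline = [energy(damped_window(0.2)) for _ in range(10)]
mu, sd = statistics.mean(baseline), statistics.stdev(baseline)
threshold = mu + 3 * sd                     # simple statistical threshold

# A poorly damped mode (damping ~ 0) rings for the whole window.
alarm = energy(damped_window(0.0)) > threshold
```

As the thesis notes, this style of detector is cheap and fast but only flags that some mode has deteriorated; identifying which mode is the job of the OIMD and KID.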

Relevance:

30.00%

Publisher:

Abstract:

Improving the efficiency and flexibility of pulsed power supply technologies is one of the most substantial concerns in pulsed power systems, particularly for plasma generation. Recently, the improvement of pulsed power supplies has become of greater concern due to the extension of pulsed power applications to environmental and industrial areas. This paper proposes a current-source-based topology that enables power flow control. The main contribution of this configuration is the utilization of low-to-medium voltage semiconductor switches for high-voltage generation. A number of switch-diode-capacitor units are arranged at the output of the topology to convert the current-source energy into voltage form and generate a pulsed power output with sufficient voltage magnitude and stress. Simulations have been carried out on the Matlab/SIMULINK platform to verify the capability of this topology to perform the desired duties. Efficiency and flexibility are the main advantages of this topology.

Relevance:

30.00%

Publisher:

Abstract:

This research discusses some of the issues encountered while developing a set of WGEN parameters for Chile and advice for others interested in developing WGEN parameters for arid climates. The WGEN program is a commonly used and a valuable research tool; however, it has specific limitations in arid climates that need careful consideration. These limitations are analysed in the context of generating a set of WGEN parameters for Chile. Fourteen to 26 years of precipitation data are used to calculate precipitation parameters for 18 locations in Chile, and 3–8 years of temperature and solar radiation data are analysed to generate parameters for seven of these locations. Results indicate that weather generation parameters in arid regions are sensitive to erroneous or missing precipitation data. Research shows that the WGEN-estimated gamma distribution shape parameter (α) for daily precipitation in arid zones will tend to cluster around discrete values of 0 or 1, masking the high sensitivity of these parameters to additional data. Rather than focus on the length in years when assessing the adequacy of a data record for estimation of precipitation parameters, researchers should focus on the number of wet days in dry months in a data set. Analysis of the WGEN routines for the estimation of temperature and solar radiation parameters indicates that errors can occur when individual ‘months’ have fewer than two wet days in the data set. Recommendations are provided to improve methods for estimation of WGEN parameters in arid climates.
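The sensitivity of the gamma shape parameter to sparse wet-day records can be illustrated with a common approximation: Thom's maximum-likelihood estimator, a standard choice in weather generators (whether WGEN uses exactly this form is not asserted here). The wet-day rainfall amounts below are invented.

```python
# Thom's approximate maximum-likelihood estimator for the gamma shape
# parameter alpha of wet-day precipitation amounts. Amounts (mm) are
# invented; with few wet days the estimate becomes unstable, which is
# the sensitivity the study describes for arid-zone records.
import math

def thom_shape(amounts):
    """alpha ~ (1 + sqrt(1 + 4A/3)) / (4A), A = ln(mean) - mean(ln x)."""
    mean = sum(amounts) / len(amounts)
    mean_log = sum(math.log(x) for x in amounts) / len(amounts)
    A = math.log(mean) - mean_log   # A > 0 unless all amounts are equal
    return (1 + math.sqrt(1 + 4 * A / 3)) / (4 * A)

wet_days = [0.5, 1.2, 3.4, 0.8, 7.9, 2.1, 0.3, 5.6]   # mm, invented
alpha = thom_shape(wet_days)
```

Because A is a difference of log-averages, a handful of wet days (as in a dry month of an arid-zone record) makes alpha swing widely with each added observation, supporting the study's advice to judge record adequacy by wet days in dry months rather than record length.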