87 results for pseudo-random permutation
Abstract:
Background: Selection bias in HIV prevalence estimates occurs if non-participation in testing is correlated with HIV status. Longitudinal data suggest that individuals who know or suspect they are HIV positive are less likely to participate in testing in HIV surveys, in which case methods to correct for missing data that are based on imputation and observed characteristics will produce biased results. Methods: The identity of the HIV survey interviewer is typically associated with HIV testing participation, but is unlikely to be correlated with HIV status. Interviewer identity can thus be used as a selection variable, allowing estimation of Heckman-type selection models. These models produce asymptotically unbiased HIV prevalence estimates, even when non-participation is correlated with unobserved characteristics such as knowledge of HIV status. We introduce a new random-effects method to these selection models which overcomes the non-convergence caused by collinearity, small-sample bias, and incorrect inference in existing approaches. Our method is easy to implement in standard statistical software, and allows the construction of bootstrapped standard errors which adjust for the fact that the relationship between testing and HIV status is uncertain and needs to be estimated. Results: Using nationally representative data from the Demographic and Health Surveys, we illustrate our approach with new point estimates and confidence intervals (CI) for HIV prevalence among men in Ghana (2003) and Zambia (2007). In Ghana, we find little evidence of selection bias, as our selection model gives an HIV prevalence estimate of 1.4% (95% CI 1.2%–1.6%), compared to 1.6% among those with a valid HIV test. In Zambia, our selection model gives an HIV prevalence estimate of 16.3% (95% CI 11.0%–18.4%), compared to 12.1% among those with a valid HIV test; those who decline to test in Zambia are therefore more likely to be HIV positive. Conclusions: Our approach corrects for selection bias in HIV prevalence estimates, can be implemented even when HIV prevalence or non-participation is very high or very low, and provides a practical solution for accounting for both sampling and parameter uncertainty in the estimation of confidence intervals. The wide confidence intervals in the high-prevalence example indicate that it is difficult to correct statistically for the bias that may occur when a large proportion of people refuse to test.
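For readers less familiar with the approach, the Heckman-type setup described above can be sketched as follows; the notation is illustrative and is not the authors' exact specification.

```latex
% Schematic Heckman-type selection model (illustrative notation, not the
% authors' exact specification): latent HIV status and testing consent.
\begin{align*}
h_i^* &= x_i'\beta + \varepsilon_{1i}                     && \text{(HIV status equation)}\\
s_i^* &= x_i'\gamma + z_i'\delta + \varepsilon_{2i}       && \text{(testing-consent equation)}\\
s_i  &= \mathbf{1}[\,s_i^* > 0\,], \quad h_i \text{ observed only if } s_i = 1, \quad
       \operatorname{corr}(\varepsilon_{1i},\varepsilon_{2i}) = \rho .
\end{align*}
```

Here z_i is the interviewer-identity selection variable: it shifts participation in testing but is excluded from the HIV status equation, and a non-zero rho captures selection on unobservables such as knowledge of one's own status.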
Abstract:
We describe a pre-processing correlation attack on an FPGA implementation of AES, protected with a random clocking countermeasure that exhibits complex variations in both the location and amplitude of the power consumption patterns of the AES rounds. It is demonstrated that the merged round patterns can be pre-processed to identify and extract the individual round amplitudes, enabling a successful power analysis attack. We show that the requirement of the random clocking countermeasure to provide a varying execution time between processing rounds can be exploited to select a subset of data where sufficient current decay has occurred, further improving the attack. In comparison with the countermeasure's estimated security of 3 million traces from an integration attack, we show that, by applying our proposed techniques, the countermeasure can be broken with as few as 13,000 traces.
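For context, the correlation step that such power analysis attacks build on can be sketched as below. This is a generic illustration assuming round-aligned, pre-processed traces; it is not the authors' specific pre-processing pipeline, and a real AES attack would model the S-box output (the S-box table is omitted here for brevity).

```python
import numpy as np

def hamming_weight(x: np.ndarray) -> np.ndarray:
    # Popcount of each byte value (0..255).
    return np.unpackbits(x.astype(np.uint8)[:, None], axis=1).sum(axis=1)

def cpa_rank_key_byte(traces: np.ndarray, plaintext_byte: np.ndarray) -> np.ndarray:
    """Generic correlation power analysis (CPA) for one key byte.

    traces: (n_traces, n_samples) round-aligned power measurements.
    plaintext_byte: (n_traces,) value of one plaintext byte per trace (uint8).
    Returns the absolute Pearson correlation per key guess, maximised over samples.
    A real AES attack would model HW(Sbox[pt ^ k]); here the S-box lookup is
    omitted, so the leakage model is simply HW(pt ^ k).
    """
    t_centred = traces - traces.mean(axis=0)
    t_norm = np.sqrt((t_centred ** 2).sum(axis=0))
    scores = np.zeros(256)
    for k in range(256):
        h = hamming_weight(plaintext_byte ^ k).astype(float)
        h_centred = h - h.mean()
        corr = h_centred @ t_centred / (np.sqrt((h_centred ** 2).sum()) * t_norm + 1e-12)
        scores[k] = np.abs(corr).max()
    return scores  # argmax(scores) is the most likely key-byte guess
```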
Abstract:
Authenticated encryption algorithms protect both the confidentiality and integrity of messages in a single processing pass. We show how to utilize the L◦P ◦S transform of the Russian GOST R 34.11-2012 standard hash “Streebog” to build an efficient, lightweight algorithm for Authenticated Encryption with Associated Data (AEAD) via the Sponge construction. The proposed algorithm “StriBob” has attractive security properties, is faster than the Streebog hash alone, twice as fast as the GOST 28147-89 encryption algorithm, and requires only a modest amount of running-time memory. StriBob is a Round 1 candidate in the CAESAR competition.
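As a rough illustration of the underlying Sponge/duplex idea only (not the StriBob specification), the sketch below uses a placeholder permutation and omits the padding and domain-separation rules that a real design such as StriBob defines.

```python
import hashlib

RATE, CAPACITY = 32, 32            # bytes; an illustrative split of a 64-byte state
STATE = RATE + CAPACITY

def permutation(state: bytes) -> bytes:
    # Placeholder for the cryptographic permutation (StriBob builds its rounds
    # from the Streebog L◦P◦S transform); SHA-512 is used here only to keep the
    # sketch runnable and is not a drop-in substitute.
    return hashlib.sha512(state).digest()[:STATE]

def _duplex(state: bytes, block: bytes) -> bytes:
    # XOR a block into the rate part of the state, then apply the permutation.
    padded = block.ljust(RATE, b"\x00")
    rate = bytes(a ^ b for a, b in zip(state[:RATE], padded))
    return permutation(rate + state[RATE:])

def encrypt(key: bytes, nonce: bytes, ad: bytes, msg: bytes):
    # Initialise the sponge state with key and nonce (assumes they fit in the state).
    state = permutation((key + nonce).ljust(STATE, b"\x00"))
    # Absorb the associated data.
    for i in range(0, len(ad), RATE):
        state = _duplex(state, ad[i:i + RATE])
    # Encrypt: the rate acts as a keystream; absorbing the plaintext makes the
    # ciphertext the new rate contents (SpongeWrap-style duplexing).
    ct = b""
    for i in range(0, len(msg), RATE):
        block = msg[i:i + RATE]
        ct += bytes(m ^ s for m, s in zip(block, state[:len(block)]))
        state = _duplex(state, block)
    # Squeeze an authentication tag.
    tag = permutation(state)[:16]
    return ct, tag
```

Decryption mirrors this: XOR the ciphertext with the rate to recover the plaintext, overwrite the rate with the ciphertext, and recompute the tag for comparison.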
Abstract:
Generative algorithms for random graphs have yielded insights into the structure and evolution of real-world networks. Most networks exhibit a well-known set of properties, such as heavy-tailed degree distributions, clustering and community formation. Usually, random graph models consider only structural information, but many real-world networks also have labelled vertices and weighted edges. In this paper, we present a generative model for random graphs with discrete vertex labels and numeric edge weights. The weights are represented as a set of Beta Mixture Models (BMMs) with an arbitrary number of mixtures, which are learned from real-world networks. We propose a Bayesian Variational Inference (VI) approach, which yields accurate estimates while keeping computation times tractable. We compare our approach to state-of-the-art random labelled graph generators and to an earlier approach based on Gaussian Mixture Models (GMMs). Our results allow us to draw conclusions about the contribution of vertex labels and edge weights to graph structure.
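As a small illustration of the edge-weight component only (hypothetical parameters, not the authors' learned models or their VI procedure), edge weights for a generated graph could be drawn from a BMM like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_edge_weights(n_edges: int, mix_weights, alphas, betas) -> np.ndarray:
    """Draw edge weights from a Beta mixture model (BMM).

    mix_weights: mixture proportions (sum to 1), e.g. as learned by variational inference.
    alphas, betas: shape parameters of each Beta component.
    """
    components = rng.choice(len(mix_weights), size=n_edges, p=mix_weights)
    return rng.beta(np.asarray(alphas)[components], np.asarray(betas)[components])

# Example: a two-component BMM with mostly-low weights plus a high-weight mode.
weights = sample_edge_weights(1000, mix_weights=[0.7, 0.3],
                              alphas=[2.0, 8.0], betas=[8.0, 2.0])
```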
Abstract:
Camera traps are used to estimate densities or abundances using capture-recapture and, more recently, random encounter models (REMs). We deploy REMs to describe an invasive-native species replacement process, and to demonstrate their wider application beyond abundance estimation. The Irish hare Lepus timidus hibernicus is a high-priority endemic of conservation concern. It is threatened by an expanding population of non-native European hares L. europaeus, an invasive species of global importance. Camera traps were deployed in thirteen 1 km squares, wherein the ratio of invader to native densities was corroborated by night-driven line transect distance sampling throughout the study area of 1,652 km². Spatial patterns of invasive and native densities between the invader’s core and peripheral ranges, and native allopatry, were comparable between methods. Native densities in the peripheral range were comparable to those in native allopatry using REM, or marginally depressed using distance sampling. Numbers of the invader were substantially higher than the native in the core range, irrespective of method, with a 5:1 invader-to-native ratio indicating species replacement. We also describe a post hoc optimization protocol for REM which will inform subsequent (re-)surveys, allowing survey effort (camera hours) to be reduced by up to 57% without compromising the width of confidence intervals associated with density estimates. This approach will form the basis of a more cost-effective means of surveillance and monitoring for both the endemic and invasive species. The European hare undoubtedly represents a significant threat to the endemic Irish hare.
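Although the abstract does not restate it, REM density estimates of this kind conventionally rest on the Rowcliffe et al. (2008) ideal-gas estimator, reproduced here for reference:

```latex
% Standard REM density estimator (Rowcliffe et al. 2008), shown for reference;
% not restated in the abstract above.
D = \frac{y}{t}\cdot\frac{\pi}{v\,r\,(2+\theta)}
```

where y/t is the photographic capture rate, v the animal's day range (movement speed), and r and theta the radius and angle of the camera detection zone.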
Abstract:
What is meant by the term random? Do we understand how to identify which type of randomisation to use in our future research projects? We, as researchers, often explain randomisation to potential research participants as being a 50/50 chance of selection to either an intervention or control group, akin to drawing numbers out of a hat. Is this an accurate explanation? And are all methods of randomisation equal? This paper aims to guide the researcher through the different techniques used to randomise participants with examples of how they can be used in educational research.
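As a minimal illustration of two techniques such a guide typically contrasts (a sketch, not taken from the paper): simple randomisation is an independent coin flip per participant and can leave groups unbalanced, whereas block randomisation guarantees balance within each block.

```python
import random

def simple_randomisation(n_participants: int) -> list[str]:
    # Independent 50/50 draws: balanced in expectation,
    # but actual group sizes may differ by chance.
    return [random.choice(["intervention", "control"]) for _ in range(n_participants)]

def block_randomisation(n_participants: int, block_size: int = 4) -> list[str]:
    # Each block contains equal numbers of each arm, shuffled,
    # so allocation stays balanced throughout recruitment.
    assert block_size % 2 == 0
    allocation: list[str] = []
    while len(allocation) < n_participants:
        block = ["intervention", "control"] * (block_size // 2)
        random.shuffle(block)
        allocation.extend(block)
    return allocation[:n_participants]
```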
Abstract:
We have performed an R-matrix with pseudo-states (RMPS) calculation of electron-impact excitation in C2+. Collision strengths and effective collision strengths were determined for excitation between the lowest 24 terms, including all those arising from the 2s3l and 2s4l configurations. In the RMPS calculation, 238 terms (90 spectroscopic and 148 pseudo-state) were employed in the close-coupling (CC) expansion of the target. In order to investigate the significance of coupling to the target continuum and highly excited bound states, we compare the RMPS results with those from an R-matrix calculation that incorporated all 238 terms in the configuration-interaction expansion, but only the lowest 44 spectroscopic terms in the CC expansion. We also compare our effective collision strengths with those from an earlier 12-state R-matrix calculation (Berrington et al 1989 J. Phys. B: At. Mol. Opt. Phys. 22 665). The RMPS calculation was extremely large, involving (N+1)-electron Hamiltonian matrices of dimension up to 36 085, and required the use of our recently completed suite of parallel R-matrix programs. The full set of effective collision strengths from our RMPS calculation is available at the Oak Ridge National Laboratory Controlled Fusion Atomic Data Center web site.
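For context, the effective collision strengths referred to here are conventionally the Maxwellian averages of the collision strengths (standard definition, shown for reference):

```latex
% Conventional Maxwellian-averaged effective collision strength, shown for context.
\Upsilon_{ij}(T_e) = \int_0^{\infty} \Omega_{ij}(E_f)\,
  \exp\!\left(-\frac{E_f}{kT_e}\right) d\!\left(\frac{E_f}{kT_e}\right)
```

where Omega_ij is the collision strength, E_f the final energy of the scattered electron, and T_e the electron temperature.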
Abstract:
Electron-impact ionization cross sections for argon are calculated using both non-perturbative R-matrix with pseudo-states (RMPS) and perturbative distorted-wave methods. At twice the ionization potential, the 3p^6 1S ground-term cross section from a distorted-wave calculation is found to be a factor of 4 above crossed-beams experimental measurements, while with the inclusion of term-dependent continuum effects in the distorted-wave method, the perturbative cross section still remains almost a factor of 2 above experiment. In the case of ionization from the metastable 3p^5 4s 3P term, the distorted-wave ionization cross section is also higher than the experimental cross section. On the other hand, the ground-term cross section determined from a non-perturbative RMPS calculation, which includes 27 LS spectroscopic terms and another 282 LS pseudo-state terms to represent the high Rydberg states and the target continuum, is found to be in excellent agreement with experimental measurements, while the RMPS result is below the experimental cross section for ionization from the metastable term. We conclude that both continuum term dependence and interchannel coupling effects, which are included in the RMPS method, are important for ionization from the ground term, and that interchannel coupling is also significant for ionization from the metastable term.
Abstract:
A new heuristic based on the Nawaz–Enscore–Ham (NEH) algorithm is proposed in this paper for solving the permutation flowshop scheduling problem. A new priority rule is proposed that accounts for the average, mean absolute deviation, skewness and kurtosis of the processing times, in order to fully describe their distribution. A new tie-breaking rule is also introduced to achieve effective job insertion with the objective of minimizing both makespan and machine idle time. Statistical tests illustrate the better solution quality of the proposed algorithm compared to existing benchmark heuristics.
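For reference, a compact sketch of the classical NEH insertion procedure that the proposed heuristic builds on is given below; it uses the standard largest-total-processing-time priority rule and simple tie-breaking, not the distribution-based priority and tie-breaking rules introduced in the paper.

```python
def makespan(sequence, p):
    """Completion time of the last job on the last machine.
    p[j][m] = processing time of job j on machine m."""
    n_machines = len(p[0])
    completion = [0] * n_machines
    for job in sequence:
        completion[0] += p[job][0]
        for m in range(1, n_machines):
            completion[m] = max(completion[m], completion[m - 1]) + p[job][m]
    return completion[-1]

def neh(p):
    """Classical NEH: order jobs by decreasing total processing time,
    then insert each job at the position minimising the partial makespan."""
    jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
    sequence = []
    for job in jobs:
        sequence = min(
            (sequence[:i] + [job] + sequence[i:] for i in range(len(sequence) + 1)),
            key=lambda seq: makespan(seq, p),
        )
    return sequence, makespan(sequence, p)

# Example: 4 jobs on 3 machines (rows = jobs, columns = machines).
p = [[5, 9, 8], [9, 3, 10], [9, 4, 5], [4, 8, 8]]
print(neh(p))
```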