9 results for Interval discrete log problem
at QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Digital signatures are an important primitive for building secure systems and are used in most real-world security protocols. However, almost all popular signature schemes are either based on the factoring assumption (RSA) or the hardness of the discrete logarithm problem (DSA/ECDSA). In the case of classical cryptanalytic advances or progress on the development of quantum computers, the hardness of these closely related problems might be seriously weakened. A potential alternative approach is the construction of signature schemes based on the hardness of certain lattice problems that are assumed to be intractable even for quantum computers. Due to significant research advancements in recent years, lattice-based schemes have now become practical and appear to be a very viable alternative to number-theoretic cryptography. In this article, we focus on recent developments and the current state of the art in lattice-based digital signatures and provide a comprehensive survey discussing signature schemes with respect to practicality. Additionally, we discuss future research areas that are essential for the continued development of lattice-based cryptography.
Abstract:
Understanding how the timing of motor output is coupled to sensory temporal information is largely based on synchronisation of movements through small motion gaps (finger taps) to mostly empty sensory intervals (discrete beats). This study investigated synchronisation of movements between target barriers over larger motion gaps when closing time gaps of intervals were presented as either continuous, dynamic sounds, or discrete beats. Results showed that although synchronisation errors were smaller for discrete sounds, the variability of errors was lower for continuous sounds. Furthermore, finger movement between targets was found to be more sinusoidal when continuous sensory information was presented during intervals compared to discrete. When movements were made over larger amplitudes, synchronisation errors tended to be more positive and movements between barriers more sinusoidal, than for movements over shorter amplitudes. These results show that the temporal control of movement is not independent from the form of the sensory information that specifies time gaps or the magnitude of the movement required for synchronisation.
Abstract:
A flexible, mass-conservative numerical technique for solving the advection-dispersion equation for miscible contaminant transport is presented. The method combines features of puff transport models from air pollution studies with features from the random walk particle method used in water resources studies, providing a deterministic time-marching algorithm which is independent of the grid Peclet number and scales simply from one to higher dimensions. The concentration field is discretised into a number of particles, each of which is treated as a point release which advects and disperses over the time interval. The dispersed puff is itself discretised into a spatial distribution of particles whose masses can be pre-calculated. Concentration within the simulation domain is then calculated from the mass distribution as an average over some small volume. Comparison with analytical solutions for a one-dimensional fixed-duration concentration pulse and for two-dimensional transport in an axisymmetric flow field indicates that the algorithm performs well. For a given level of accuracy the new method has lower computation times than the random walk particle method.
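The puff-splitting idea described above can be illustrated with a minimal one-dimensional sketch: each particle is advected, its dispersed Gaussian puff is re-discretised onto a fixed grid of standardised offsets, and the pre-calculated mass fractions guarantee conservation. The function name, offset grid, and weighting are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def gaussian_puff_split(x, mass, v, D, dt, offsets):
    """Advect a point release and re-discretise its dispersed Gaussian puff.

    offsets: fixed grid of standardised positions (in units of the puff
    standard deviation) where child particles are placed; mass fractions
    follow the Gaussian density and are normalised so mass is conserved."""
    sigma = np.sqrt(2.0 * D * dt)        # spread of the dispersed puff
    w = np.exp(-0.5 * offsets**2)        # Gaussian weights at the offsets
    w /= w.sum()                         # normalisation enforces conservation
    children_x = x + v * dt + sigma * offsets
    children_m = mass * w
    return children_x, children_m

# One time step for a unit-mass release at the origin.
offsets = np.linspace(-3.0, 3.0, 7)
xs, ms = gaussian_puff_split(0.0, 1.0, v=1.0, D=0.1, dt=0.5, offsets=offsets)
```

Because the child masses are a normalised discretisation of the analytical puff, total mass is preserved exactly at every step, which is the mass-conservation property the abstract emphasises.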
Abstract:
The identification of nonlinear dynamic systems using radial basis function (RBF) neural models is studied in this paper. Given a model selection criterion, the main objective is to effectively and efficiently build a parsimonious, compact neural model that generalizes well over unseen data. This is achieved by simultaneous model structure selection and optimization of the parameters over the continuous parameter space. This is a hard mixed-integer problem, and a unified analytic framework is proposed to enable an effective and efficient two-stage mixed discrete-continuous identification procedure. This novel framework combines the advantages of an iterative discrete two-stage subset selection technique for model structure determination with the calculus-based continuous optimization of the model parameters. Computational complexity analysis and simulation studies confirm the efficacy of the proposed algorithm.
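The two-stage discrete-continuous flavour can be sketched as follows: a greedy forward subset selection stands in for the discrete structure-determination stage, and a crude width search stands in for the continuous parameter optimisation. All names and details are illustrative, not the paper's algorithm.

```python
import numpy as np

def rbf_design(x, centres, width):
    """Design matrix of Gaussian radial basis functions."""
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * width**2))

def forward_select(x, y, candidates, width, n_terms):
    """Discrete stage: greedily pick the RBF centres that most reduce
    the residual sum of squares of a least-squares fit."""
    chosen = []
    for _ in range(n_terms):
        best = None
        for c in candidates:
            if c in chosen:
                continue
            Phi = rbf_design(x, np.array(chosen + [c]), width)
            theta = np.linalg.lstsq(Phi, y, rcond=None)[0]
            rss = float(np.sum((y - Phi @ theta) ** 2))
            if best is None or rss < best[0]:
                best = (rss, c)
        chosen.append(best[1])
    return chosen, best[0]

def refine_width(x, y, centres, widths):
    """Continuous stage (crudely discretised here): re-optimise the
    shared kernel width for the selected structure."""
    def rss(w):
        Phi = rbf_design(x, np.array(centres), w)
        theta = np.linalg.lstsq(Phi, y, rcond=None)[0]
        return float(np.sum((y - Phi @ theta) ** 2))
    return min(widths, key=rss)

# Toy identification problem: approximate a sine with 4 selected centres.
x = np.linspace(0.0, 2.0 * np.pi, 50)
y = np.sin(x)
centres, rss = forward_select(x, y, list(x[::5]), width=1.0, n_terms=4)
best_w = refine_width(x, y, centres, [0.5, 1.0, 1.5, 2.0])
```

Alternating a discrete structure search with a continuous parameter refinement in this way is the general pattern the abstract describes; the paper's framework integrates the two stages analytically rather than by grid search.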
Testing the stability of the benefit transfer function for discrete choice contingent valuation data
Abstract:
This paper examines the stability of the benefit transfer function across 42 recreational forests in the British Isles. A working definition of reliable function transfer is put forward, and a suitable statistical test is provided. A novel split sample method is used to test the sensitivity of the models' log-likelihood values to the removal of contingent valuation (CV) responses collected at individual forest sites. We find that a stable function improves our measure of transfer reliability, but not by much. We conclude that, in empirical studies on transferability, considerations of function stability are secondary to the availability and quality of site attribute data. Modellers can weigh the advantages of transfer function stability vis-a-vis the value of additional information on recreation site attributes. (c) 2008 Elsevier GmbH. All rights reserved.
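The flavour of such a stability test can be sketched with a likelihood-ratio comparison of a pooled (transferable) model against site-specific models. An intercept-only Bernoulli model is used here as a deliberately simplified stand-in for the paper's discrete choice CV models.

```python
import math

def bernoulli_loglik(responses, p):
    """Log-likelihood of yes/no CV responses under acceptance probability p.
    Degenerate p (exactly 0 or 1) is not handled in this sketch."""
    return sum(math.log(p) if r else math.log(1.0 - p) for r in responses)

def transfer_lr_test(site_a, site_b):
    """Likelihood-ratio statistic: does one pooled function fit both
    sites as well as separate site-specific functions do?"""
    pooled = site_a + site_b
    p_pool = sum(pooled) / len(pooled)
    ll_pool = bernoulli_loglik(pooled, p_pool)
    ll_split = (bernoulli_loglik(site_a, sum(site_a) / len(site_a))
                + bernoulli_loglik(site_b, sum(site_b) / len(site_b)))
    return 2.0 * (ll_split - ll_pool)   # ~ chi-squared, 1 d.o.f. here

lr_same = transfer_lr_test([1, 0, 1, 0], [1, 0, 1, 0])       # identical sites
lr_diff = transfer_lr_test([1]*8 + [0]*2, [1]*2 + [0]*8)     # dissimilar sites
```

When the two sites behave identically the statistic is zero and the transfer function is "stable"; when acceptance rates differ sharply it exceeds the 5% chi-squared critical value (3.84 for one degree of freedom), rejecting transferability.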
Abstract:
In this study we show that forest areas contribute significantly to the estimated benefits from outdoor recreation in Northern Ireland. Secondly, we provide empirical evidence of the gains in the statistical efficiency of both benefit and parameter estimates obtained by analysing follow-up responses with double-bounded interval data analysis. As these gains are considerable, it is clearly worth considering this method in CVM survey design even when moderately large sample sizes are used. Finally, we demonstrate that estimates of the means and medians of WTP distributions for access to forest recreation are of plausible magnitude, are consistent with previous UK studies, and converge across parametric and non-parametric methods of estimation.
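The efficiency gain from follow-up responses comes from the second bid bracketing each respondent's willingness to pay (WTP) in an interval. A minimal sketch of the resulting interval likelihood, under an illustrative logistic WTP distribution (not necessarily the paper's specification):

```python
import math

def logistic_cdf(x, mu, s):
    """CDF of a logistic WTP distribution with location mu and scale s."""
    return 1.0 / (1.0 + math.exp(-(x - mu) / s))

def double_bounded_loglik(intervals, mu, s):
    """Log-likelihood of double-bounded CV responses: each yes/no pair
    brackets WTP in (lower, upper); None marks an open-ended interval."""
    ll = 0.0
    for lo, hi in intervals:
        upper = logistic_cdf(hi, mu, s) if hi is not None else 1.0
        lower = logistic_cdf(lo, mu, s) if lo is not None else 0.0
        ll += math.log(upper - lower)
    return ll

# (lower, upper) WTP brackets implied by four hypothetical respondents.
intervals = [(5.0, 10.0), (10.0, 20.0), (None, 5.0), (20.0, None)]
ll_near = double_bounded_loglik(intervals, mu=10.0, s=3.0)
ll_far = double_bounded_loglik(intervals, mu=100.0, s=3.0)
```

Each response contributes the probability mass of a whole interval rather than a single bound, which is why the double-bounded design tightens both benefit and parameter estimates relative to single-bounded data.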
Abstract:
An adhesive elasto-plastic contact model for the discrete element method with three-dimensional non-spherical particles is proposed and investigated to achieve quantitative prediction of cohesive powder flowability. Simulations have been performed for uniaxial consolidation followed by unconfined compression to failure using this model. The model has been shown to be capable of predicting the experimental flow function (unconfined compressive strength vs. the prior consolidation stress) for a limestone powder which has been selected as a reference solid in the Europe-wide PARDEM research network. Contact plasticity in the model is shown to affect the flowability significantly and is thus essential for producing satisfactory computations of the behaviour of a cohesive granular material. The model predicts a linear relationship between a normalized unconfined compressive strength and the product of coordination number and solid fraction. This linear relationship is in line with the Rumpf model for the tensile strength of particulate agglomerates. Even when the contact adhesion is forced to remain constant, the increasing unconfined strength arising from stress consolidation is still predicted, which has its origin in the contact plasticity leading to microstructural evolution of the coordination number. The filled porosity is predicted to increase as the contact adhesion increases. Under confined compression, the porosity reduces more gradually for the load-dependent adhesion compared to constant adhesion. It was found that the contribution of adhesive force to the limiting friction has a significant effect on the bulk unconfined strength. The results provide new insights and propose a micromechanics-based measure for characterising the strength and flowability of cohesive granular materials.
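A one-dimensional piecewise-linear force law illustrates the spirit of such adhesive elasto-plastic contact models (a Walton-Braun/Luding-style sketch; the paper's three-dimensional non-spherical model is considerably richer, and the stiffness names here are illustrative):

```python
def adhesive_elastoplastic_force(overlap, max_overlap, k1, k2, k_adh):
    """Piecewise-linear adhesive elasto-plastic normal contact force.

    Loading follows stiffness k1; unloading follows the stiffer k2 from
    the maximum overlap reached (the plastic history), so part of the
    deformation is permanent; the force may go tensile, limited by the
    adhesion branch -k_adh * overlap."""
    f_load = k1 * overlap
    f_unload = k2 * (overlap - max_overlap * (1.0 - k1 / k2))
    f = min(f_load, f_unload)            # plastic history caps the force
    return max(f, -k_adh * overlap)      # adhesion limits the tensile force

k1, k2, k_adh = 1000.0, 2000.0, 500.0
f_peak = adhesive_elastoplastic_force(0.10, 0.10, k1, k2, k_adh)     # loading
f_unload = adhesive_elastoplastic_force(0.06, 0.10, k1, k2, k_adh)   # unloading
f_tensile = adhesive_elastoplastic_force(0.02, 0.10, k1, k2, k_adh)  # adhesion
```

The plastic branch is what makes the adhesion history-dependent: a contact consolidated to a larger maximum overlap unloads along a shifted line and can sustain tensile force, which is how stress consolidation raises unconfined strength even at constant contact adhesion.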
Abstract:
Retinopathy of prematurity (ROP) is a rare disease in which retinal blood vessels of premature infants fail to develop normally, and is one of the major causes of childhood blindness throughout the world. The Discrete Conditional Phase-type (DC-Ph) model consists of two components, the conditional component measuring the inter-relationships between covariates and the survival component which models the survival distribution using a Coxian phase-type distribution. This paper expands the DC-Ph models by introducing a support vector machine (SVM), in the role of the conditional component. The SVM is capable of classifying multiple outcomes and is used to identify the infant's risk of developing ROP. Class imbalance makes predicting rare events difficult. A new class decomposition technique, which deals with the problem of multiclass imbalance, is introduced. Based on the SVM classification, the length of stay in the neonatal ward is modelled using a 5-, 8- or 9-phase Coxian distribution.
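A Coxian phase-type distribution models the length of stay as passage through ordered exponential phases with a chance of early absorption (discharge) after each phase. A small simulation sketch, with phase rates and exit probabilities made up purely for illustration:

```python
import random

def sample_coxian(rates, absorb_probs, rng):
    """Draw one length of stay from a Coxian phase-type distribution.

    The process passes through exponential phases in order; at the end
    of phase i it is absorbed with probability absorb_probs[i], else it
    moves on to phase i+1. The final probability must be 1.0."""
    t = 0.0
    for lam, p in zip(rates, absorb_probs):
        t += rng.expovariate(lam)       # time spent in this phase
        if rng.random() < p:            # absorbed: patient discharged
            break
    return t

# Illustrative 3-phase Coxian: analytic mean = 1 + 0.7*(2 + 0.5*4) = 3.8.
rng = random.Random(42)
stays = [sample_coxian([1.0, 0.5, 0.25], [0.3, 0.5, 1.0], rng)
         for _ in range(5000)]
```

In the DC-Ph setting the fitted number of phases (5, 8, or 9 here) and their rates are chosen per SVM-predicted risk class, so different classes of infants get different stay distributions.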
Abstract:
In-situ characterisation of thermocouple sensors is a challenging problem. Recently the authors presented a blind characterisation technique based on the cross-relation method of blind identification. The method allows in-situ identification of two thermocouple probes, each with a different dynamic response, using only sampled sensor measurement data. While the technique offers certain advantages over alternative methods, including low estimation variance and the ability to compensate for noise induced bias, the robustness of the method is limited by the multimodal nature of the cost function. In this paper, a normalisation term is proposed which improves the convexity of the cost function. Further, a normalisation and bias compensation hybrid approach is presented that exploits the advantages of both normalisation and bias compensation. It is found that the optimum of the hybrid cost function is less biased and more stable than when only normalisation is applied. All results were verified by simulation.
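The cross-relation principle behind such blind identification can be sketched with two first-order sensor models: filtering probe 1's output through probe 2's candidate model should match probe 2's output filtered through probe 1's model when the time constants are correct. This sketch uses a simple parameter-norm normalisation as one common choice; the paper's exact normalisation term may differ, and all names here are illustrative.

```python
import math

def first_order(u, tau, dt):
    """Simulate a first-order sensor y' = (u - y)/tau, discretised."""
    a = math.exp(-dt / tau)
    y, out = 0.0, []
    for uk in u:
        y = a * y + (1.0 - a) * uk
        out.append(y)
    return out

def cross_relation_cost(y1, y2, tau1, tau2, dt):
    """Cross-relation residual: G2(y1) - G1(y2) vanishes at the true
    time constants because cascaded LTI filters commute. Dividing by
    the parameter norm rules out the trivial zero solution."""
    e = [a - b for a, b in zip(first_order(y1, tau2, dt),
                               first_order(y2, tau1, dt))]
    return sum(v * v for v in e) / (tau1**2 + tau2**2)

# Two probes with different dynamics observing the same step input.
dt = 0.01
u = [0.0] * 20 + [1.0] * 180
y1 = first_order(u, 0.5, dt)     # fast probe
y2 = first_order(u, 1.5, dt)     # slow probe
cost_true = cross_relation_cost(y1, y2, 0.5, 1.5, dt)
cost_wrong = cross_relation_cost(y1, y2, 1.0, 1.0, dt)
```

Minimising this cost over candidate time constants needs no knowledge of the true input, which is what makes the identification blind; the normalisation and bias-compensation refinements discussed in the abstract address the shape of exactly this kind of cost surface.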