97 results for implementations


Relevance: 10.00%

Abstract:

The idea of proxying network connectivity has been proposed as an efficient mechanism to maintain network presence on behalf of idle devices, so that they can “sleep”. The concept has been around for many years, and alternative architectural solutions have been proposed to implement it, leading to different trade-offs in capability, effectiveness and energy efficiency. However, there is neither a clear understanding of the potential for energy saving nor a detailed performance comparison among the different proxy architectures. In this paper, we estimate the potential energy saving achievable by different architectural solutions for proxying network connectivity. Our work considers the trade-off between the saving achievable by putting idle devices to sleep and the additional power consumed to run the proxy. Our analysis encompasses a broad range of alternatives, covering both implementations already available on the market and prototypes built for research purposes. The main value of our work is that the estimation is carried out under realistic conditions, based on power measurements, usage profiles and proxying capabilities.
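As a rough, hypothetical illustration of the trade-off described above, the sketch below compares the daily energy of an always-on device with a sleep-plus-proxy configuration. All power figures, the usage profile and the number of devices sharing a proxy are invented placeholders, not measurements from the paper.

# Hypothetical figures for illustration only; the paper's estimates rely on
# real power measurements, usage profiles and proxying capabilities.
HOURS_PER_DAY = 24.0

def daily_energy_wh(active_h, p_active_w, p_inactive_w,
                    proxy_power_w=0.0, devices_per_proxy=1):
    """Energy (Wh/day) of one device plus its share of the proxy's consumption."""
    inactive_h = HOURS_PER_DAY - active_h
    device_wh = active_h * p_active_w + inactive_h * p_inactive_w
    proxy_share_wh = HOURS_PER_DAY * proxy_power_w / devices_per_proxy
    return device_wh + proxy_share_wh

# Always-on baseline: the idle device keeps drawing its idle power.
baseline = daily_energy_wh(active_h=8, p_active_w=60, p_inactive_w=40)

# Proxied configuration: the device sleeps at 2 W while a shared 15 W proxy
# maintains its network presence on behalf of 20 devices.
proxied = daily_energy_wh(active_h=8, p_active_w=60, p_inactive_w=2,
                          proxy_power_w=15, devices_per_proxy=20)

saving = 100 * (baseline - proxied) / baseline
print(f"baseline {baseline:.0f} Wh/day, proxied {proxied:.0f} Wh/day, saving {saving:.1f}%")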

Relevance: 10.00%

Abstract:

Lattice-based cryptography has gained credence recently as a replacement for current public-key cryptosystems, due to its quantum resilience, versatility, and relatively low key sizes. To date, encryption based on the learning with errors (LWE) problem has only been investigated from an ideal lattice standpoint, due to its computation and size efficiencies. However, a thorough investigation of standard lattices in practice has yet to be considered. Standard lattices may be preferred to ideal lattices due to their stronger security assumptions and less restrictive parameter selection process. In this paper, an area-optimised hardware architecture of a standard lattice-based cryptographic scheme is proposed. The design is implemented on an FPGA, and it is found that both encryption and decryption fit comfortably on a Spartan-6 FPGA. This is the first hardware architecture for standard lattice-based cryptography reported in the literature to date, and thus is a benchmark for future implementations.
Additionally, a revised discrete Gaussian sampler is proposed which is the fastest of its type to date, and is the first to investigate the cost savings of implementing with λ/2 bits of precision. Performance results are promising in comparison to the hardware designs of the equivalent ring-LWE scheme: in addition to providing a stronger security proof, the proposed design generates 1272 encryptions per second and 4395 decryptions per second.
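For readers unfamiliar with the scheme being accelerated, the following toy sketch shows textbook LWE encryption over standard (unstructured) lattices, the kind of computation the hardware architecture implements. The parameters are illustrative and insecure, and a simple bounded-uniform noise generator stands in for the paper's discrete Gaussian sampler.

import numpy as np

rng = np.random.default_rng(0)

# Toy parameters for illustration only (far too small to be secure).
n, m, q = 64, 256, 7681

def small_noise(size):
    """Small error terms; a stand-in for the discrete Gaussian sampler."""
    return rng.integers(-4, 5, size=size)

def keygen():
    A = rng.integers(0, q, size=(m, n))     # full random matrix (standard lattice)
    s = rng.integers(0, q, size=n)          # secret vector
    b = (A @ s + small_noise(m)) % q        # LWE samples b = A.s + e
    return (A, b), s

def encrypt(pk, bit):
    A, b = pk
    r = rng.integers(0, 2, size=m)          # random binary combination of samples
    c1 = (r @ A) % q
    c2 = (r @ b + bit * (q // 2)) % q       # encode the bit in the high half of Z_q
    return c1, c2

def decrypt(sk, ct):
    c1, c2 = ct
    v = (c2 - c1 @ sk) % q                  # noisy encoding of the bit
    return int(abs(v - q // 2) < q // 4)    # round to the nearest encoding

pk, sk = keygen()
for bit in (0, 1):
    assert decrypt(sk, encrypt(pk, bit)) == bit
print("toy standard-LWE round trip OK")

The only structural difference from ring-LWE is that A here is a full random matrix rather than a structured polynomial, which is what drives the storage and area cost of the standard-lattice design.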

Relevance: 10.00%

Abstract:

Large-scale commercial exploitation of wave energy is certain to require the deployment of wave energy converters (WECs) in arrays, creating ‘WEC farms’. An understanding of the hydrodynamic interactions in such arrays is essential for determining optimum layouts of WECs, as well as for calculating the area of ocean that the farms will require. It is equally important to consider the potential impact of wave farms on the local and distal wave climates and coastal processes; a poor understanding of the resulting environmental impact may hamper progress, as it would make planning consents more difficult to obtain. It is therefore clear that an understanding of the interactions between WECs within a farm is vital for the continued development of the wave energy industry.

To support WEC farm design, a range of different numerical models has been developed, with both wave phase-resolving and wave phase-averaging models now available. Phase-resolving methods are primarily based on potential flow models and include semi-analytical techniques, boundary element methods and methods involving the mild-slope equations. Phase-averaging methods are all based around spectral wave models, with supra-grid and sub-grid wave farm models available as alternative implementations.

The aims, underlying principles, strengths, weaknesses and obtained results of the main numerical methods currently used for modelling wave energy converter arrays are described in this paper, using a common framework. This allows a qualitative comparative analysis of the different methods to be performed at the end of the paper, including consideration of the conditions under which the models may be applied, the output of the models, and the relationship between array size and computational effort. Guidance is also presented for developers on the most suitable numerical method to use for given aspects of WEC farm design. For instance, certain models are more suitable for studying near-field effects, whilst others are preferable for investigating far-field effects of the WEC farms. Furthermore, the analysis presented in this paper identifies areas in which the numerical modelling of WEC arrays is relatively weak, and thus highlights those in which future developments are required.
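As a toy illustration of the sub-grid, phase-averaging idea mentioned above, the sketch below represents a WEC farm as a frequency-dependent energy sink applied to a wave spectrum and reports the change in significant wave height behind the farm. The spectrum and the absorption curve are invented placeholders rather than output from any of the models reviewed in the paper.

import numpy as np

# Frequencies (Hz) and a made-up incident variance-density spectrum S(f) [m^2/Hz].
f = np.linspace(0.05, 0.30, 60)
S_incident = 0.5 * np.exp(-0.5 * ((f - 0.1) / 0.02) ** 2)   # narrow-banded swell

# Hypothetical frequency-dependent power absorption of the farm (0..1); in a
# real sub-grid model this would come from the WECs' power transfer function.
absorption = 0.4 * np.exp(-0.5 * ((f - 0.1) / 0.03) ** 2)

# Sub-grid "energy sink": the transmitted spectrum in the lee of the farm.
S_transmitted = S_incident * (1.0 - absorption)

def spectral_moment0(S, f):
    """Zeroth spectral moment m0 via the trapezoidal rule."""
    return float(np.sum(0.5 * (S[1:] + S[:-1]) * np.diff(f)))

def significant_wave_height(S, f):
    """Hm0 = 4 * sqrt(m0)."""
    return 4.0 * np.sqrt(spectral_moment0(S, f))

print(f"Hm0 incident    : {significant_wave_height(S_incident, f):.2f} m")
print(f"Hm0 behind farm : {significant_wave_height(S_transmitted, f):.2f} m")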

Relevance: 10.00%

Abstract:

FPGAs and GPUs are often used when real-time performance in video processing is required. An accelerated processor is chosen based on task-specific priorities (power consumption, processing time and detection accuracy), and this decision is normally made once at design time. All three characteristics are important, particularly in battery-powered systems. Here we propose a method for moving the selection of processing platform from a single design-time choice to a continuous run-time one. We implement Histogram of Oriented Gradients (HOG) detectors for cars and people and Mixture of Gaussians (MoG) motion detectors running across FPGA, GPU and CPU in a heterogeneous system, and use this to detect illegally parked vehicles in urban scenes. Power, time and accuracy information for each detector is characterised. An anomaly measure is assigned to each detected object based on its trajectory and location, compared to learned contextual movement patterns. This drives processor and implementation selection, so that scenes with high behavioural anomalies are processed with faster but more power-hungry implementations, while routine or static time periods are processed with power-optimised, less accurate, slower versions. Real-time performance is evaluated on video datasets including i-LIDS. Compared to power-optimised static selection, automatic dynamic implementation mapping is 10% more accurate but draws 12 W of extra power in our testbed desktop system.
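The run-time mapping idea can be sketched as a small selection policy: each implementation is characterised offline by power, latency and accuracy, and the current anomaly level decides which one to run. The figures, names and the 0.7 threshold below are invented for illustration and are not the characterisation data reported in the paper.

from dataclasses import dataclass

@dataclass
class Implementation:
    name: str
    power_w: float      # measured platform power draw
    latency_ms: float   # per-frame processing time
    accuracy: float     # detection accuracy on a validation set

# Hypothetical characterisation of one detector on three platforms.
CANDIDATES = [
    Implementation("cpu_hog_low_power", power_w=8.0, latency_ms=120.0, accuracy=0.78),
    Implementation("fpga_hog",          power_w=12.0, latency_ms=35.0, accuracy=0.85),
    Implementation("gpu_hog_full",      power_w=95.0, latency_ms=12.0, accuracy=0.92),
]

def select_implementation(anomaly_score, frame_budget_ms):
    """Pick the implementation for the current scene.

    High behavioural anomaly -> favour accuracy; quiet scenes -> favour the
    lowest power draw that still meets the frame budget.
    """
    feasible = [c for c in CANDIDATES if c.latency_ms <= frame_budget_ms]
    if not feasible:
        feasible = CANDIDATES                       # degrade gracefully
    if anomaly_score > 0.7:                         # threshold is illustrative
        return max(feasible, key=lambda c: c.accuracy)
    return min(feasible, key=lambda c: c.power_w)

print(select_implementation(anomaly_score=0.9, frame_budget_ms=40).name)   # gpu_hog_full
print(select_implementation(anomaly_score=0.1, frame_budget_ms=150).name)  # cpu_hog_low_power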

Relevance: 10.00%

Abstract:

Estimates of HIV prevalence are important for policy: they establish the health status of a country's population and allow the effectiveness of population-based interventions and campaigns to be evaluated. However, participation rates in testing for surveillance conducted as part of household surveys, on which many of these estimates are based, can be low. HIV-positive individuals may be less likely to participate because they fear disclosure, in which case estimates obtained using conventional approaches to deal with missing data, such as imputation-based methods, will be biased. We develop a Heckman-type simultaneous equation approach which accounts for non-ignorable selection but, unlike previous implementations, allows for spatial dependence and does not impose a homogeneous selection process on all respondents. In addition, our framework addresses the issue of separation, where, for instance, some factors are severely unbalanced and highly predictive of the response, which would ordinarily prevent model convergence. Estimation is carried out within a penalized likelihood framework, where smoothing is achieved using a parametrization of the smoothing criterion that makes estimation more stable and efficient. We provide software for straightforward implementation of the proposed approach, and apply our methodology to estimating national and sub-national HIV prevalence in Swaziland, Zimbabwe and Zambia.
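As a minimal sketch of the underlying idea, the following code fits a classical bivariate-probit selection model (a Heckman-type model for a binary outcome) to simulated data by maximum likelihood. It omits the spatial dependence, heterogeneous selection and penalized smoothing that the paper adds, and all data and parameter values below are simulated, not taken from the surveys analysed.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(1)

# Simulated survey: the decision to test and the HIV status share an
# unobserved factor (rho != 0), so the missingness is non-ignorable.
n = 500
x = np.column_stack([np.ones(n), rng.normal(size=n)])   # outcome covariates
z = np.column_stack([np.ones(n), rng.normal(size=n)])   # selection covariates
beta_true, gamma_true, rho_true = np.array([-1.0, 0.8]), np.array([0.5, 1.0]), -0.5
eps, u = rng.multivariate_normal([0.0, 0.0],
                                 [[1.0, rho_true], [rho_true, 1.0]], size=n).T
tested = z @ gamma_true + u > 0
status = x @ beta_true + eps > 0            # used only where tested is True

def biv_cdf(a, b, r):
    """P(U <= a, V <= b) for a standard bivariate normal with correlation r."""
    mvn = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, r], [r, 1.0]])
    return mvn.cdf(np.column_stack([a, b]))

def negloglik(theta):
    beta, gamma, rho = theta[:2], theta[2:4], np.tanh(theta[4])
    xb, zg = x @ beta, z @ gamma
    pos, neg = tested & status, tested & ~status
    ll = np.empty(n)
    ll[~tested] = norm.logcdf(-zg[~tested])      # declined to test
    ll[pos] = np.log(np.clip(biv_cdf(xb[pos], zg[pos], rho), 1e-300, 1.0))
    ll[neg] = np.log(np.clip(biv_cdf(-xb[neg], zg[neg], -rho), 1e-300, 1.0))
    return -ll.sum()

fit = minimize(negloglik, np.zeros(5), method="Nelder-Mead",
               options={"maxiter": 3000, "xatol": 1e-3, "fatol": 1e-3})
beta_hat = fit.x[:2]

# Selection-corrected prevalence: average P(y=1) over everyone, not just the
# subset who agreed to test.
print(f"naive prevalence (tested only): {status[tested].mean():.3f}")
print(f"selection-corrected estimate  : {norm.cdf(x @ beta_hat).mean():.3f}")
print(f"true prevalence in this sample: {status.mean():.3f}")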

Relevance: 10.00%

Abstract:

With security and surveillance, there is an increasing need to process image data efficiently and effectively, either at source or in a large data network. Whilst the Field-Programmable Gate Array (FPGA) has been seen as a key technology for enabling this, the design process has been viewed as problematic in terms of the time and effort needed for implementation and verification. The work here proposes a different approach: using optimized FPGA-based soft-core processors, which allow the user to exploit task- and data-level parallelism to achieve the quality of dedicated FPGA implementations whilst reducing design time. The paper also reports preliminary progress on the design flow used to program the structure. An implementation of a Histogram of Gradients algorithm is also reported, showing that a performance of 328 fps can be achieved with this design approach, whilst avoiding the long design time, verification and debugging steps associated with conventional FPGA implementations.
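To make the task- and data-level parallelism concrete, the sketch below computes per-cell gradient-orientation histograms, the core step of a HOG-style detector; each cell depends only on its own pixels, so cells can be distributed across parallel soft-core processors. This is a plain NumPy illustration of that property, not the soft-core design flow described in the paper.

import numpy as np

def cell_histograms(gray, cell=8, bins=9):
    """Unsigned gradient-orientation histograms for each cell x cell block.

    Each cell's histogram depends only on its own pixels, so the cells can be
    computed independently, e.g. one per processing element.
    """
    gx = np.zeros_like(gray, dtype=float)
    gy = np.zeros_like(gray, dtype=float)
    gx[:, 1:-1] = gray[:, 2:] - gray[:, :-2]        # central differences
    gy[1:-1, :] = gray[2:, :] - gray[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0    # unsigned orientation

    rows, cols = gray.shape[0] // cell, gray.shape[1] // cell
    hists = np.zeros((rows, cols, bins))
    for r in range(rows):
        for c in range(cols):
            sl = (slice(r * cell, (r + 1) * cell), slice(c * cell, (c + 1) * cell))
            idx = (ang[sl] / (180.0 / bins)).astype(int) % bins
            np.add.at(hists[r, c], idx.ravel(), mag[sl].ravel())   # magnitude-weighted bins
    return hists

frame = np.random.default_rng(0).integers(0, 256, size=(64, 128)).astype(float)
print(cell_histograms(frame).shape)   # (8, 16, 9)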

Relevance: 10.00%

Abstract:

As the development of a viable quantum computer nears, existing widely used public-key cryptosystems, such as RSA, will no longer be secure, and significant effort is therefore being invested into post-quantum cryptography (PQC). Lattice-based cryptography (LBC) is one such promising area of PQC, offering versatile, efficient, and high-performance security services. However, the vulnerability of these implementations to side-channel attacks (SCA) remains significantly understudied. Most, if not all, lattice-based cryptosystems require noise samples generated from a discrete Gaussian distribution, and a successful timing-analysis attack can break the whole cryptosystem, making the discrete Gaussian sampler the module most vulnerable to SCA. This research proposes countermeasures against timing information leakage with FPGA-based designs of CDT-based discrete Gaussian samplers with constant response time, targeting encryption and signature scheme parameters. The proposed designs are compared against the state of the art and are shown to significantly outperform existing implementations. For encryption, the proposed sampler is 9x faster than the only other existing time-independent CDT sampler design. For signatures, the first time-independent CDT sampler in hardware is proposed.
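The timing leak and its countermeasure can be illustrated in software: a naive CDT sampler stops scanning the cumulative table as soon as the random value is located, so its running time depends on the sampled value, whereas the sketch below always scans the full table so the number of comparisons per sample is constant. The standard deviation, tail cut and precision are illustrative values, and this Python code is only a functional model of the idea, not the FPGA design.

import math
import secrets

SIGMA = 3.33          # illustrative standard deviation (encryption-style parameter)
TAIL_CUT = 13         # table covers |x| = 0 .. TAIL_CUT (tail probability negligible)
PRECISION = 64        # bits of randomness compared against the table

def build_cdt(sigma, tail, precision):
    """Cumulative distribution table of |X| for the discrete Gaussian, as integers."""
    rho = [math.exp(-(x * x) / (2.0 * sigma * sigma)) for x in range(tail + 1)]
    total = rho[0] + 2.0 * sum(rho[1:])           # weight of the two-sided distribution
    cdt, acc = [], 0.0
    for x, r in enumerate(rho):
        acc += (r if x == 0 else 2.0 * r) / total
        cdt.append(int(acc * (1 << precision)))
    return cdt

CDT = build_cdt(SIGMA, TAIL_CUT, PRECISION)

def sample_constant_scan():
    """Sample |X|, then a sign, scanning the whole table on every call."""
    u = secrets.randbits(PRECISION)
    value = 0
    for threshold in CDT:
        # Add 1 for every table entry already passed by u; every call performs
        # exactly len(CDT) comparisons, regardless of the sampled value.
        value += int(u >= threshold)
    sign = 1 - 2 * secrets.randbits(1)            # +1 or -1
    return 0 if value == 0 else sign * value

samples = [sample_constant_scan() for _ in range(10000)]
mean = sum(samples) / len(samples)
std = (sum(s * s for s in samples) / len(samples)) ** 0.5
print(f"sample mean ~ {mean:.3f} (expected ~ 0), sample std ~ {std:.2f} (sigma = {SIGMA})")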