82 results for 2D barcode based authentication scheme
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
In this paper, a coupling of a fluorophore-DNA barcode and a bead-based immunoassay for detecting avian influenza virus (AIV) with PCR-like sensitivity is reported. The assay is based on a sandwich immunoassay with fluorophore-tagged oligonucleotides serving as representative barcodes. The detection involves sandwiching the target AIV between magnetic immunoprobes and barcode-carrying immunoprobes. Because each barcode-carrying immunoprobe is functionalized with a multitude of fluorophore-DNA barcode strands, many DNA barcodes are released for each positive binding event, resulting in amplification of the signal. Using an inactivated H16N3 AIV as a model, a linear response over five orders of magnitude was obtained, and the sensitivity of the detection was comparable to that of conventional RT-PCR. Moreover, the entire detection required less than 2 hours. The results indicate that the method has great potential as an alternative for surveillance of epidemic outbreaks caused by AIV and by other viruses and microorganisms.
Abstract:
In this paper, we report a coupling of a fluorophore-DNA barcode and a bead-based immunoassay for the detection of avian influenza virus (AIV), a potential pandemic threat to human health and a source of enormous economic losses. The detection strategy is based on a sandwich immunoassay with fluorophore-tagged oligonucleotides serving as representative fluorescent barcodes. Despite its simplicity, the assay has sensitivity comparable to RT-PCR amplification and holds great potential as a rapid and sensitive on-chip detection format.
Abstract:
In order to address the increasing compromise of user privacy on mobile devices, a Fuzzy Logic-based implicit authentication scheme is proposed in this paper. The proposed scheme computes, in real time, an aggregate score over selected features and a threshold derived from current and historic data depicting the user's routine. The tuned fuzzy system is then applied to the aggregate score and the threshold to determine the trust level of the current user. The proposed fuzzy-integrated implicit authentication scheme is designed to operate adaptively and completely in the background, require a minimal training period, and enable high system accuracy while providing timely detection of abnormal activity. In this paper, we explore Fuzzy Logic-based authentication in depth. Gaussian and triangular membership functions are investigated and compared using real data collected over several weeks from different Android phone users. The presented results show that the proposed Fuzzy Logic approach is a highly effective and viable scheme for lightweight real-time implicit authentication on mobile devices.
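As a rough illustration of how an aggregate score, an adaptive threshold, and a membership function can be combined, the sketch below assumes a Gaussian membership over the score, centred on the threshold; the function names, the sigma value, and the example scores are hypothetical and are not taken from the proposed scheme.

    # Toy fuzzy trust evaluation with Gaussian and triangular memberships.
    import math

    def gaussian_membership(x, centre, sigma):
        """Gaussian membership function centred on `centre`."""
        return math.exp(-((x - centre) ** 2) / (2 * sigma ** 2))

    def triangular_membership(x, a, b, c):
        """Triangular membership function with peak at b and feet at a and c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def trust_level(aggregate_score, threshold, sigma=0.15):
        """Map closeness of the aggregate score to the user's adaptive threshold
        onto a trust level in [0, 1]."""
        return gaussian_membership(aggregate_score, threshold, sigma)

    # A score near the routine-derived threshold yields high trust; a score far
    # below it yields low trust and would flag abnormal activity.
    print(trust_level(0.72, threshold=0.70))   # ~0.99
    print(trust_level(0.30, threshold=0.70))   # ~0.03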
Abstract:
In order to protect user privacy on mobile devices, an event-driven implicit authentication scheme is proposed in this paper. Several methods of utilizing the scheme to recognize legitimate user behavior are investigated. The investigated methods compute an aggregate score and a threshold in real time to determine the trust level of the current user, using real data derived from user interaction with the device. The proposed scheme is designed to operate completely in the background, require a minimal training period, enable a high user recognition rate for implicit authentication, and promptly detect abnormal activity that can be used to trigger explicitly authenticated access control. In this paper, we investigate threshold computation through standard deviation and EWMA (exponentially weighted moving average) based algorithms. The results of extensive experiments on user data collected over a period of several weeks from an Android phone indicate that the proposed approach is feasible and effective for lightweight real-time implicit authentication on mobile smartphones.
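As a small illustration of the two threshold strategies mentioned above, the sketch below assumes an aggregate behavior score in [0, 1]; the score history, smoothing factor, and k multiplier are hypothetical values, not taken from the paper.

    # EWMA- and standard-deviation-based thresholds over a score history.
    import statistics

    def ewma(scores, alpha=0.2):
        """Exponentially weighted moving average of the score history."""
        smoothed = scores[0]
        for s in scores[1:]:
            smoothed = alpha * s + (1 - alpha) * smoothed
        return smoothed

    def sd_threshold(scores, k=2.0):
        """Threshold placed k standard deviations below the historical mean."""
        return statistics.mean(scores) - k * statistics.pstdev(scores)

    history = [0.81, 0.78, 0.84, 0.79, 0.82, 0.80]
    current = 0.45
    threshold = sd_threshold(history)        # alternatively: ewma(history) - margin
    print("legitimate" if current >= threshold else "trigger explicit authentication")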
Abstract:
Objective
Global migration of healthcare workers places responsibility on employers to comply with legal employment rights whilst ensuring patient safety remains the central goal. We describe the pilot of a communication assessment designed for doctors who trained and communicated with patients and colleagues in a different language from that of the host country. It is unique in assessing clinical communication without assessing knowledge.
Methods
A 14-station OSCE was developed using a domain-based marking scheme, covering professional communication and English language skills (speaking, listening, reading and writing) in routine, acute and emotionally challenging contexts, with patients, carers and healthcare teams. Candidates (n = 43), non-UK trained volunteers applying to the UK Foundation Programme, were provided with relevant station information prior to the exam.
Results
The criteria for passing the test included achieving the pass score and passing 10 or more of the 14 stations. Of the 43 candidates, nine failed on the station criteria. Two failed the pass score and also the station criteria. The Cronbach's alpha coefficient was 0.866.
Conclusion
This pilot tested ‘proof of concept’ of a new domain-based communication assessment for non-UK trained doctors.
Practice implications
The test would enable employers and regulators to verify communication competence and safety in clinical contexts, independent of clinical knowledge, for doctors who trained in a language different from that of the host country.
Abstract:
Fully Homomorphic Encryption (FHE) is a recently developed cryptographic technique which allows computations on encrypted data. There are many interesting applications for this encryption method, especially within cloud computing. However, its computational complexity is such that it is not yet practical for real-time applications. This work proposes optimised hardware architectures for the encryption step of an integer-based FHE scheme with the aim of improving its practicality. A low-area design and a high-speed parallel design are proposed and implemented on a Xilinx Virtex-7 FPGA, targeting the available DSP slices, which offer high-speed multiplication and accumulation. Both use the Comba multiplication scheduling method to manage the large multiplications required with unevenly sized multiplicands and to minimise the number of read and write operations to RAM. Results show that speed-up factors of 3.6 and 10.4 can be achieved for the encryption step with medium-sized security parameters for the low-area and parallel designs respectively, compared to the benchmark software implementation on an Intel Core2 Duo E8400 platform running at 3 GHz.
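The Comba scheduling mentioned above is a column-wise reorganisation of schoolbook multi-precision multiplication: all partial products contributing to one result limb are accumulated before moving on, which keeps intermediate reads and writes to a minimum. The following plain software sketch assumes 32-bit limbs; it is illustrative only and does not describe the paper's FPGA datapath.

    # Column-wise (Comba) multi-precision multiplication with 32-bit limbs.
    BITS = 32
    MASK = (1 << BITS) - 1

    def to_limbs(x, n):
        """Split integer x into n little-endian limbs of BITS bits each."""
        return [(x >> (BITS * i)) & MASK for i in range(n)]

    def from_limbs(limbs):
        return sum(limb << (BITS * i) for i, limb in enumerate(limbs))

    def comba_multiply(a, b):
        """Accumulate every partial product of a column, store one result limb,
        and carry the remainder into the next column."""
        na, nb = len(a), len(b)
        result = [0] * (na + nb)
        acc = 0
        for k in range(na + nb - 1):
            for i in range(max(0, k - nb + 1), min(k + 1, na)):
                acc += a[i] * b[k - i]
            result[k] = acc & MASK
            acc >>= BITS
        result[na + nb - 1] = acc & MASK
        return result

    x, y = 2**521 - 1, 2**127 - 1                  # unevenly sized multiplicands
    assert from_limbs(comba_multiply(to_limbs(x, 17), to_limbs(y, 4))) == x * y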
Abstract:
Lattice-based cryptography has gained credence recently as a replacement for current public-key cryptosystems, due to its quantum resilience, versatility, and relatively low key sizes. To date, encryption based on the learning with errors (LWE) problem has only been investigated from an ideal-lattice standpoint, due to its computation and size efficiencies. However, a thorough investigation of standard lattices in practice has yet to be considered. Standard lattices may be preferred to ideal lattices because of their stronger security assumptions and less restrictive parameter selection process. In this paper, an area-optimised hardware architecture of a standard lattice-based cryptographic scheme is proposed. The design is implemented on an FPGA, and it is found that both encryption and decryption fit comfortably on a Spartan-6 FPGA. This is the first hardware architecture for standard lattice-based cryptography reported in the literature to date, and it thus serves as a benchmark for future implementations.
Additionally, a revised discrete Gaussian sampler is proposed which is the fastest of its type to date and is the first to investigate the cost savings of operating with λ/2 bits of precision. Performance results are promising in comparison to hardware designs of the equivalent ring-LWE scheme: in addition to providing a stronger security proof, the proposed designs generate 1272 encryptions per second and 4395 decryptions per second.
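For context, one common way to build a discrete Gaussian sampler for LWE-style schemes is the cumulative distribution table (CDT, inversion) approach sketched below; the standard deviation, tail cut, and use of double-precision floats here are illustrative assumptions and do not reflect the paper's reduced-precision hardware sampler.

    # Sketch of a CDT (inversion) discrete Gaussian sampler.
    import bisect
    import math
    import random

    def build_cdt(sigma, tail_cut=12):
        """Cumulative table over magnitudes |x| of a centred discrete Gaussian."""
        bound = int(math.ceil(tail_cut * sigma))
        weights = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(bound + 1)]
        weights[0] /= 2.0                    # x = 0 is shared between the +/- sides
        total = sum(weights)
        cdt, running = [], 0.0
        for w in weights:
            running += w / total
            cdt.append(running)
        return cdt

    def sample(cdt):
        """Pick a magnitude by inversion of the table, then a uniform sign."""
        magnitude = min(bisect.bisect_left(cdt, random.random()), len(cdt) - 1)
        sign = -1 if (magnitude != 0 and random.random() < 0.5) else 1
        return sign * magnitude

    table = build_cdt(sigma=3.33)
    draws = [sample(table) for _ in range(10000)]
    print(sum(draws) / len(draws))           # close to 0 for a centred Gaussian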
Abstract:
This paper presents a statistical-based fault diagnosis scheme for application to internal combustion engines. The scheme relies on an identified model that describes the relationships between a set of recorded engine variables using principal component analysis (PCA). Since combustion cycles are complex in nature and produce nonlinear relationships between the recorded engine variables, the paper proposes the use of nonlinear PCA (NLPCA). The paper further justifies the use of NLPCA by comparing the model accuracy of the NLPCA model with that of a linear PCA model. A new nonlinear variable reconstruction algorithm and bivariate scatter plots are proposed for fault isolation, following the application of NLPCA. The proposed technique allows the diagnosis of different fault types under steady-state operating conditions. More precisely, nonlinear variable reconstruction can remove the fault signature from the recorded engine data, which allows the identification and isolation of the root cause of abnormal engine behaviour. The paper shows that this can lead to (i) an enhanced identification of potential root causes of abnormal events and (ii) the masking of faulty sensor readings. The effectiveness of the enhanced NLPCA based monitoring scheme is illustrated by its application to a sensor fault and a process fault. The sensor fault relates to a drift in the fuel flow reading, whilst the process fault relates to a partial blockage of the intercooler. These faults are introduced to a Volkswagen TDI 1.9 Litre diesel engine mounted on an experimental engine test bench facility.
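The variable-reconstruction idea is easiest to see in the linear-PCA setting; the toy sketch below uses simulated data and a linear model purely for illustration (the paper's scheme is nonlinear and uses recorded engine data), so every name, value, and the injected fault magnitude are assumptions.

    # Linear-PCA analogue of fault isolation by variable reconstruction.
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated data: three correlated "engine" variables under normal operation.
    t = rng.normal(size=(500, 1))
    X = np.hstack([t, 2 * t, -t]) + 0.05 * rng.normal(size=(500, 3))
    mean, std = X.mean(axis=0), X.std(axis=0)
    Xs = (X - mean) / std

    # PCA model retaining one principal component.
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    P = Vt[:1].T                    # loading vector
    C = P @ P.T                     # projection onto the model subspace

    def spe(x):
        """Squared prediction error of a scaled sample against the PCA model."""
        residual = x - C @ x
        return float(residual @ residual)

    def reconstruct(x, i):
        """Replace variable i with the value that minimises the SPE."""
        xi = np.eye(len(x))[i]
        x_rec = x.copy()
        x_rec[i] = x[i] - (xi @ (x - C @ x)) / (xi @ (np.eye(len(x)) - C) @ xi)
        return x_rec

    # Inject a drift (sensor fault) on variable 0. Reconstructing the truly
    # faulty variable removes the fault signature (small SPE); reconstructing
    # the others does not, which isolates the root cause.
    x_fault = (np.array([t[0, 0], 2 * t[0, 0], -t[0, 0]]) - mean) / std
    x_fault[0] += 4.0
    print([round(spe(reconstruct(x_fault, i)), 3) for i in range(3)])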
Abstract:
A fully homomorphic encryption (FHE) scheme is envisioned as a key cryptographic tool in building a secure and reliable cloud computing environment, as it allows arbitrary evaluation of a ciphertext without revealing the plaintext. However, existing FHE implementations remain impractical due to very high time and resource costs. To the authors’ knowledge, this paper presents the first hardware implementation of a full encryption primitive for FHE over the integers using FPGA technology. A large-integer multiplier architecture utilising Integer-FFT multiplication is proposed, and a large-integer Barrett modular reduction module is designed incorporating the proposed multiplier. The encryption primitive used in the integer-based FHE scheme is designed employing the proposed multiplier and modular reduction modules. The designs are verified using the Xilinx Virtex-7 FPGA platform. Experimental results show that a speed improvement factor of up to 44 is achievable for the hardware implementation of the FHE encryption scheme when compared to its corresponding software implementation. Moreover, performance analysis shows further speed improvements of the integer-based FHE encryption primitives may still be possible, for example through further optimisations or by targeting an ASIC platform.
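Barrett reduction itself is a classical algorithm: the division by the modulus is replaced by a multiplication with a precomputed constant followed by at most two conditional subtractions. The plain software model below is a sketch for orientation only, with an arbitrary modulus and operands, and is not a description of the paper's FPGA module.

    # Barrett modular reduction for 0 <= x < n^2.
    def barrett_setup(n):
        """Precompute the Barrett constant mu = floor(4^k / n), k = bitlen(n)."""
        k = n.bit_length()
        return k, (1 << (2 * k)) // n

    def barrett_reduce(x, n, k, mu):
        """Estimate the quotient with one multiplication by mu, then correct."""
        q = ((x >> (k - 1)) * mu) >> (k + 1)
        r = x - q * n
        while r >= n:                # at most two corrections are needed
            r -= n
        return r

    n = (1 << 61) - 1                              # illustrative modulus
    k, mu = barrett_setup(n)
    x = 1234567890123456789 * 987654321987654321   # x < n * n
    assert barrett_reduce(x, n, k, mu) == x % n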
Abstract:
Gate-tunable two-dimensional (2D) materials-based quantum capacitors (QCs) and van der Waals heterostructures involve tuning transport or optoelectronic characteristics by the field effect. Recent studies have attributed the observed gate-tunable characteristics to the change of the Fermi level in the first 2D layer adjacent to the dielectrics, whereas the penetration of the field effect through the one-molecule-thick material is often ignored or oversimplified. Here, we present a multiscale theoretical approach that combines first-principles electronic structure calculations and the Poisson–Boltzmann equation methods to model penetration of the field effect through graphene in a metal–oxide–graphene–semiconductor (MOGS) QC, including quantifying the degree of “transparency” for graphene two-dimensional electron gas (2DEG) to an electric displacement field. We find that the space charge density in the semiconductor layer can be modulated by gating in a nonlinear manner, forming an accumulation or inversion layer at the semiconductor/graphene interface. The degree of transparency is determined by the combined effect of graphene quantum capacitance and the semiconductor capacitance, which allows us to predict the ranking for a variety of monolayer 2D materials according to their transparency to an electric displacement field as follows: graphene > silicene > germanene > WS2 > WTe2 > WSe2 > MoS2 > phosphorene > MoSe2 > MoTe2, when the majority carrier is electron. Our findings reveal a general picture of operation modes and design rules for the 2D-materials-based QCs.
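One way to make the "combined effect" of the two capacitances concrete is a simple small-signal capacitive-divider picture; this is an illustrative textbook approximation under assumed ideal conditions, not the paper's multiscale treatment. The displacement field delivered through the oxide is shared between the graphene sheet, which absorbs charge through its quantum capacitance, and the semiconductor space-charge layer, so the fraction penetrating to the semiconductor scales roughly as

    T \approx \frac{C_s}{C_Q + C_s}, \qquad C_Q = e^2 \frac{\partial n}{\partial \mu},

where C_s is the semiconductor capacitance and C_Q the quantum capacitance of the 2D layer. A low density of states near the Fermi level (small C_Q, as for graphene near the Dirac point) therefore implies higher transparency, consistent with the ranking reported above.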
Abstract:
Two-dimensional (2D) hexagonal boron nitride (BN) nanosheets are excellent dielectric substrates for graphene, molybdenum disulfide, and many other 2D-nanomaterial-based electronic and photonic devices. To optimize the performance of these 2D devices, it is essential to understand the dielectric screening properties of BN nanosheets as a function of thickness. Here, electric force microscopy, together with theoretical calculations based both on state-of-the-art first-principles methods that account for van der Waals interactions and on nonlinear Thomas-Fermi theory models, is used to investigate the dielectric screening in high-quality BN nanosheets of different thicknesses. It is found that atomically thin BN nanosheets are less effective at screening electric fields, but the screening capability of BN shows a relatively weak dependence on layer thickness.
Abstract:
Previous papers have noted the difficulty of obtaining neural models that are stable under simulation when trained using prediction-error-based methods. Here the differences between series-parallel and parallel identification structures for training neural models are investigated. The effect of the error-surface shape on training convergence and simulation performance is analysed using a standard algorithm operating in both training modes. A combined series-parallel/parallel training scheme is proposed, aiming to provide a more effective means of obtaining accurate neural simulation models. Simulation examples show that the combined scheme is advantageous in circumstances where the solution space is known or suspected to be complex.
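The distinction between the two identification structures can be sketched in a few lines of code; the following is a minimal illustration assuming a trivial first-order linear stand-in for the trained neural model, with hypothetical names and signals that are not taken from the paper.

    # Contrast between series-parallel (one-step-ahead) and parallel (simulation)
    # operation of a NARX-type model; the "model" here is a toy linear stand-in.
    def model(y_prev, u_prev):
        return 0.9 * y_prev + 0.1 * u_prev

    def series_parallel(model, u, y_measured):
        """One-step-ahead prediction: regressors use *measured* past outputs."""
        return [model(y_measured[k - 1], u[k - 1]) for k in range(1, len(u))]

    def parallel(model, u, y0):
        """Simulation: regressors use the model's *own* past outputs, so errors
        accumulate and an unstable model reveals itself."""
        y_sim = [y0]
        for k in range(1, len(u)):
            y_sim.append(model(y_sim[-1], u[k - 1]))
        return y_sim

    u = [1.0] * 20                                   # step input
    y_measured = [1 - 0.9 ** k for k in range(20)]   # measured plant response
    print(series_parallel(model, u, y_measured)[:3])
    print(parallel(model, u, y0=0.0)[:4])

Training with the series-parallel structure optimises one-step-ahead error, whereas the parallel structure optimises simulation error, which is why the two modes can behave very differently on a complex error surface.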