954 results for multi-factor authentication
Abstract:
The use of Wireless Sensor Networks (WSNs) for vibration-based Structural Health Monitoring (SHM) has become a promising approach thanks to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data asynchronicity and data loss have prevented these systems from being used extensively. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of those WSNs for demanding SHM applications such as modal analysis and damage identification. Based on a brief review, this paper first reveals that Data Synchronization Error (DSE) is the most critical inherent uncertainty of SHM-oriented WSNs. The effects of this factor on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques are then investigated when merging data from multiple sensor setups. The two OMA families selected for this investigation, Frequency Domain Decomposition (FDD) and data-driven Stochastic Subspace Identification (SSI-data), were chosen because both have been widely applied over the past decade. Accelerations collected by a wired sensory system on a large-scale laboratory bridge model are used as benchmark data after a certain level of noise is added to account for the higher presence of this factor in SHM-oriented WSNs. From this source, a large number of simulations were run to generate multiple DSE-corrupted datasets for statistical analysis. The results show the robustness of FDD and the precautions needed for the SSI-data family when dealing with DSE at a relaxed level. Finally, combining the preferred OMA techniques and using channel projection in the time-domain OMA technique to cope with DSE are recommended.
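How a DSE shows up in merged sensor data can be sketched in a few lines. The following stdlib-only toy (assumed values: a single 5 Hz mode, 100 Hz sampling, a hypothetical 3-sample synchronization error) shows DSE appearing as a spurious cross-spectral phase between two channels, which is what corrupts mode-shape estimates in OMA:

```python
import cmath
import math

fs = 100.0      # sampling rate (Hz), illustrative
f0 = 5.0        # assumed modal frequency (Hz); falls exactly on a DFT bin
N = 2000        # 20 s record: an integer number of periods, so no leakage

x = [math.sin(2 * math.pi * f0 * i / fs) for i in range(N)]            # reference channel
shift = 3                                                              # DSE of 3 samples (30 ms)
y = [math.sin(2 * math.pi * f0 * (i - shift) / fs) for i in range(N)]  # desynchronized channel

def dft_bin(sig, f):
    # Single-frequency DFT coefficient at frequency f
    return sum(s * cmath.exp(-2j * math.pi * f * i / fs) for i, s in enumerate(sig))

# Cross-spectral phase at the mode frequency reveals the synchronization error:
phase_err = cmath.phase(dft_bin(x, f0) * dft_bin(y, f0).conjugate())
expected = 2 * math.pi * f0 * shift / fs   # phase lag = 2*pi*f0*dt
print(round(phase_err, 4), round(expected, 4))   # both ≈ 0.9425
```

Because the phase error grows with frequency, higher modes are distorted more, which is consistent with DSE being most damaging for mode-shape (rather than frequency) estimation.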
Abstract:
Digital signatures are often used by trusted authorities to make unique bindings between a subject and a digital object; for example, certificate authorities certify that a public key belongs to a domain name, and time-stamping authorities certify that a certain piece of information existed at a certain time. Traditional digital signature schemes, however, impose no uniqueness conditions, so a trusted authority could make multiple certifications for the same subject but different objects, whether intentionally, by accident, or under (legal or illegal) coercion. We propose the notion of a double-authentication-preventing signature, in which the value to be signed is split into two parts: a subject and a message. If a signer ever signs two different messages for the same subject, enough information is revealed to allow anyone to compute valid signatures on behalf of the signer. This double-signature forgeability property discourages signers from misbehaving (a form of self-enforcement) and would give binding authorities like CAs cryptographic arguments with which to resist legal coercion. We give a generic construction using a new type of trapdoor function with extractability properties, which we show can be instantiated using the group of sign-agnostic quadratic residues modulo a Blum integer.
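The "double signing leaks the key" mechanic has a well-known analogue: Schnorr signatures with a nonce derived only from the subject. This toy sketch (tiny parameters p=23, q=11; NOT the paper's construction, which uses sign-agnostic quadratic residues modulo a Blum integer) shows two signatures on one subject revealing the secret key:

```python
import hashlib

# Toy Schnorr-style group: g = 4 has order q = 11 in Z_23*. Illustrative sizes only.
p, q, g = 23, 11, 4

def H(*parts):
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

x = 7                      # signer's secret key
y = pow(g, x, p)           # public key

def sign(subject, message):
    # The nonce depends only on (key, subject): two signatures on the
    # same subject necessarily reuse it.
    k = 1 + H("nonce", x, subject) % (q - 1)
    r = pow(g, k, p)
    e = H(subject, message, r) % q
    return r, (k + e * x) % q, e

sig1 = sign("example.com", "certificate-A")
# In this tiny group two challenges can collide mod 11, so scan for a
# second message whose challenge differs (negligible issue at real sizes):
for i in range(2, 50):
    sig2 = sign("example.com", f"certificate-{i}")
    if (sig1[2] - sig2[2]) % q != 0:
        break

r1, s1, e1 = sig1
r2, s2, e2 = sig2
assert pow(g, s1, p) == (r1 * pow(y, e1, p)) % p   # sig1 verifies normally
assert r1 == r2                                    # same subject => same nonce
# Double signing one subject leaks the secret key to everyone:
x_recovered = (s1 - s2) * pow(e1 - e2, -1, q) % q
print(x_recovered == x)
```

Since s = k + e·x (mod q) with the same k in both signatures, subtracting the two equations cancels k and solves directly for x, which is the extraction property the abstract describes.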
Abstract:
In this paper we tackle the problem of finding an efficient signature verification scheme when the number of signatures is significantly large and the verifier is relatively weak. In particular, we tackle the problem of message authentication in many-to-one communication networks known as concast communication. The paper presents three signature screening algorithms for a variant of ElGamal-type digital signatures. The cost of these schemes is n applications of hash functions, 2n modular multiplications, and n modular additions, plus the verification of one digital signature, where n is the number of signatures. The paper also presents a solution to the open problem of finding fast signature screening for non-RSA digital signature schemes.
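The paper's algorithms are not reproduced here, but the screening idea can be sketched for a toy Schnorr-type (ElGamal-family) signature, with the same cost profile the abstract states: n hashes, roughly 2n modular multiplications, n modular additions, and a single exponentiation-based verification. (Plain aggregation like this screens honest batches; resisting adversarially crafted batches needs extra measures such as randomizers.)

```python
import hashlib

p, q, g = 23, 11, 4          # toy Schnorr-type subgroup (illustrative sizes only)
x = 7                        # secret key
y = pow(g, x, p)             # public key

def H(*parts):
    data = "|".join(map(str, parts)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def sign(m, i):
    k = 1 + H("k", x, m, i) % (q - 1)   # per-signature nonce
    r = pow(g, k, p)
    e = H(m, r) % q
    return r, (k + e * x) % q           # check: g^s == r * y^e (mod p)

sigs = [(m, *sign(m, i)) for i, m in enumerate(f"msg{j}" for j in range(5))]

# Screening: n hashes, ~2n modular multiplications, n modular additions...
S, R, E = 0, 1, 0
for m, r, s in sigs:
    e = H(m, r) % q          # n hash evaluations
    S = (S + s) % q          # n modular additions
    R = (R * r) % p          # n modular multiplications
    E = (E + e) % q
# ...then one signature-style verification for the whole batch:
ok = pow(g, S, p) == (R * pow(y, E, p)) % p
print(ok)
```

The batch check works because g^(Σs) = Π g^(k_i) · g^(x·Σe_i) = Π r_i · y^(Σe_i), so summing the s values collapses n verifications into one.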
Abstract:
The aim of this study was to evaluate the factor structure of the Baby Eating Behaviour Questionnaire (BEBQ) in an Australian community sample of mother-infant dyads. A secondary aim was to explore the relationship between the BEBQ subscales and infant gender, weight and current feeding mode. Confirmatory factor analysis (CFA) utilising structural equation modelling examined the hypothesised 4-factor model of the BEBQ. Only mothers (N=467) who completed all items on the BEBQ (infant age: M=17 weeks, SD=3 weeks) were included in the analysis. The original 4-factor model did not provide an acceptable fit to the data due to poor performance of the Satiety responsiveness factor. Removal of this factor (3 items) resulted in a well-fitting 3-factor model. Cronbach’s α was acceptable for the Enjoyment of food (α=0.73), Food responsiveness (α=0.78) and Slowness in eating (α=0.68) subscales but low for the Satiety responsiveness (α=0.56) subscale. Enjoyment of food was associated with higher infant weight whereas Slowness in eating and Satiety responsiveness were both associated with lower infant weight. Differences in all four subscales as a function of feeding mode were observed. This study is the first to use CFA to evaluate the hypothesised factor structure of the BEBQ. Findings support further development work on the Satiety responsiveness subscale in particular, but confirm the utility of the Enjoyment of food, Food responsiveness and Slowness in eating subscales.
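The reliability figures above are Cronbach's α, computed as α = k/(k−1)·(1 − Σσ²ᵢ/σ²ₜₒₜₐₗ) over k items. A minimal self-contained illustration (synthetic scores, not the BEBQ items):

```python
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item, all of equal length (one score per respondent)."""
    k = len(items)
    item_vars = sum(statistics.variance(it) for it in items)      # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]              # per-respondent totals
    return k / (k - 1) * (1 - item_vars / statistics.variance(totals))

# Three perfectly correlated synthetic items -> alpha = 1.0
items = [[1, 2, 3, 4, 5], [2, 3, 4, 5, 6], [3, 4, 5, 6, 7]]
print(round(cronbach_alpha(items), 3))   # 1.0
```

With items that covary only weakly, the total-score variance approaches the sum of item variances and α falls toward 0, which is why the Satiety responsiveness subscale's α=0.56 signals weak internal consistency.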
Abstract:
This paper presents a novel place recognition algorithm inspired by the recent discovery of overlapping and multi-scale spatial maps in the rodent brain. We mimic this hierarchical framework by training arrays of Support Vector Machines to recognize places at multiple spatial scales. Place match hypotheses are then cross-validated across all spatial scales, a process which combines the spatial specificity of the finest spatial map with the consensus provided by broader mapping scales. Experiments on three real-world datasets including a large robotics benchmark demonstrate that mapping over multiple scales uniformly improves place recognition performance over a single-scale approach without sacrificing localization accuracy. We present analysis that illustrates how matching over multiple scales leads to better place recognition performance and discuss several promising areas for future investigation.
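The cross-validation step can be sketched abstractly. In this toy (hypothetical per-place scores standing in for the per-scale SVM outputs; the threshold is an assumption, not from the paper), the finest-scale hypothesis is accepted only when every broader scale agrees:

```python
# place -> score, per spatial scale (hypothetical classifier outputs)
scales = {
    "fine":   {"A": 0.9, "B": 0.2, "C": 0.1},
    "medium": {"A": 0.7, "B": 0.6, "C": 0.2},
    "coarse": {"A": 0.8, "B": 0.7, "C": 0.3},
}

def cross_validated_match(scales, threshold=0.5):
    # Take the hypothesis from the finest (most spatially specific) map...
    best = max(scales["fine"], key=scales["fine"].get)
    # ...and require consensus from every broader scale before accepting it.
    if all(scores[best] >= threshold for scores in scales.values()):
        return best
    return None

print(cross_validated_match(scales))   # "A"
```

This captures the stated trade-off: the fine scale supplies specificity, while the coarser maps veto fine-scale false positives.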
Abstract:
In this paper we introduce a new technique for obtaining the slow-motion dynamics in nonequilibrium and singularly perturbed problems characterized by multiple scales. Our method is based on a straightforward asymptotic reduction of the order of the governing differential equation and leads to amplitude equations that describe the slowly varying envelope of a uniformly valid asymptotic expansion. Because of its relation to the Renormalization Group, this may constitute a simpler, and in certain cases more general, approach to the derivation of asymptotic expansions than other mainstream methods such as the method of Multiple Scales or Matched Asymptotic Expansions. We illustrate our method with a number of singularly perturbed problems for ordinary and partial differential equations and recover certain results from the literature as special cases. © 2010 IOS Press and the authors. All rights reserved.
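The flavor of such amplitude equations can be seen on a standard textbook example (a generic illustration, not an example taken from the paper): the weakly damped oscillator.

```latex
% Textbook illustration: slow amplitude dynamics of a weakly damped oscillator.
\[
  y'' + \epsilon y' + y = 0, \qquad 0 < \epsilon \ll 1 .
\]
% The naive expansion y = y_0 + \epsilon y_1 + \cdots produces secular growth:
\[
  y \sim A_0 \cos(t+\phi_0) \;-\; \frac{\epsilon t}{2}\, A_0 \cos(t+\phi_0) \;+\; \cdots
\]
% Absorbing the secular term into a slowly varying amplitude A(T), with slow
% time T = \epsilon t, yields the amplitude equation and the uniform result:
\[
  \frac{dA}{dT} = -\frac{A}{2}
  \quad\Longrightarrow\quad
  y \sim A_0\, e^{-\epsilon t/2} \cos(t+\phi_0).
\]
```

The amplitude equation dA/dT = −A/2 is exactly the "slowly varying envelope" object the abstract refers to: it removes the secular term and makes the expansion uniformly valid for times of order 1/ε.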
Abstract:
A dynamic accumulator is an algorithm that merges a large set of elements into a constant-size value such that, for each accumulated element, there is a witness confirming that the element was included in the value, and such that elements can be dynamically added to and deleted from the original set. Recently, Wang et al. presented a dynamic accumulator for batch updates at ICICS 2007. However, their construction suffers from two serious problems. We analyze them and propose a way to repair their scheme. We then use the accumulator to construct a new scheme for common secure indices with conjunctive keyword-based retrieval.
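Wang et al.'s batch-update construction is not reproduced here, but the basic accumulator/witness mechanics can be sketched with a minimal RSA-style accumulator (toy modulus; real schemes use a large modulus with a trusted or trapdoor-free setup, and elements are mapped to primes):

```python
# Toy RSA-style accumulator. N = 61*53 = 3233, phi(N) = 3120; elements are
# primes coprime to phi(N). Illustrative sizes only.
N = 61 * 53
g = 2
elements = [7, 11, 17]

def accumulate(elems):
    acc = g
    for e in elems:
        acc = pow(acc, e, N)          # acc = g^(product of elements) mod N
    return acc

def witness(elems, target):
    # Witness = accumulator computed over everything except `target`.
    return accumulate([e for e in elems if e != target])

def verify(acc, e, wit):
    return pow(wit, e, N) == acc

acc = accumulate(elements)
w7 = witness(elements, 7)
print(verify(acc, 7, w7))             # membership of 7 holds

# Dynamic addition: raise the accumulator (and every existing witness)
# to the new element.
acc2 = pow(acc, 19, N)
w7_updated = pow(w7, 19, N)
print(verify(acc2, 7, w7_updated))
# Efficient deletion requires the trapdoor phi(N); updating many witnesses
# at once is the batch-update problem the accumulator above addresses.
```

Verification works because wit^e = (g^(Π others))^e = g^(Π all) = acc, and adding an element simply multiplies the exponent, which is why witnesses can be updated without recomputation from scratch.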
Abstract:
A parallel authentication and public-key encryption scheme is introduced and exemplified on joint encryption and signing, which compares favorably with sequential Encrypt-then-Sign (EtS) or Sign-then-Encrypt (StE) schemes in both efficiency and security. A security model for signcryption, and thus for joint encryption and signing, has recently been defined which considers possible attacks and security goals. Such a scheme is considered secure if the encryption part guarantees indistinguishability and the signature part prevents existential forgeries, for outsider as well as insider adversaries. We propose two parallel signcryption schemes that are efficient alternatives to Commit-then-Sign-and-Encrypt (Ct&S&E). Both are provably secure in the random oracle model. The first, called generic parallel encrypt and sign, is secure if the encryption scheme is semantically secure against chosen-ciphertext attacks and the signature scheme prevents existential forgeries against random-message attacks. The second, called optimal parallel encrypt and sign, applies random oracles, similarly to the OAEP technique, to achieve security using encryption and signature components with very weak security requirements: encryption need only be one-way under chosen-plaintext attacks while the signature need only resist universal forgeries under random-plaintext attack, which is actually the case for both plain-RSA encryption and signatures under the usual RSA assumption. Both proposals are generic in the sense that any suitable encryption and signature schemes (i.e., ones that simply achieve the required security) can be used. Furthermore, they allow parallel encryption and signing, as well as parallel decryption and verification. Properties of parallel encrypt and sign schemes are considered and a new security standard for parallel signcryption is proposed.
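The structural idea, committing to the message so that signing and encrypting can proceed independently, can be sketched with toy primitives (HMAC stands in for the signature and a hash keystream for the encryption; a real instantiation needs schemes with the security levels the abstract specifies):

```python
import hashlib
import hmac
import secrets

sig_key = secrets.token_bytes(32)   # toy signing key (symmetric stand-in)
enc_key = secrets.token_bytes(32)   # toy encryption key

def commit(message, r):
    return hashlib.sha256(r + message).digest()

def toy_sign(data):
    return hmac.new(sig_key, data, hashlib.sha256).digest()

def keystream(nonce, length):
    return (hashlib.sha256(enc_key + nonce).digest() * (length // 32 + 1))[:length]

m = b"order #1234"
r = secrets.token_bytes(16)
c = commit(m, r)

# The two expensive operations are now independent and can run in parallel:
sigma = toy_sign(c)                                                   # sign only the commitment
nonce = secrets.token_bytes(16)
ct = bytes(a ^ b for a, b in zip(r + m, keystream(nonce, 16 + len(m))))  # encrypt the opening

# Receiver: decrypt, then verify signature and opening (also parallelizable).
pt = bytes(a ^ b for a, b in zip(ct, keystream(nonce, len(ct))))
r2, m2 = pt[:16], pt[16:]
ok = hmac.compare_digest(toy_sign(commit(m2, r2)), sigma)
print(ok, m2)
```

Because the signature covers only the hiding commitment and the ciphertext carries the opening, neither operation has to wait for the other, which is the source of the parallelism claimed above.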
Abstract:
We study the multicast stream authentication problem when an opponent can drop, reorder and inject data packets into the communication channel. In this context, bandwidth limitation and fast authentication are the core concerns, so any authentication scheme should minimize both the packet overhead and the time the receiver spends checking the authenticity of collected elements. Recently, Tartary and Wang developed a provably secure protocol with small packet overhead and a reduced number of signature verifications at the receiver. In this paper, we propose a hybrid scheme based on Tartary and Wang's approach and Merkle hash trees. Our construction exhibits a smaller overhead and much faster processing at the receiver, making it even more suitable for multicast than the earlier approach. Like Tartary and Wang's protocol, our construction is provably secure and allows total recovery of the data stream despite erasures and injections occurring during transmission.
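The Merkle-hash-tree ingredient can be sketched on its own (a toy packet set; the paper's hybrid with Tartary and Wang's protocol is more involved). Only the root needs to be signed once, after which each packet is authenticated with a logarithmic number of hashes:

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                     # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def auth_path(leaves, idx):
    """Sibling hashes from leaf `idx` up to the root."""
    level = [h(x) for x in leaves]
    path = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append(level[idx ^ 1])            # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return path

def verify(leaf, idx, path, root):
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if idx % 2 == 0 else h(sib + node)
        idx //= 2
    return node == root

packets = [f"packet-{i}".encode() for i in range(8)]
root = merkle_root(packets)                    # sign only this value, once
path = auth_path(packets, 5)
print(verify(packets[5], 5, path, root))       # per-packet cost: log2(n) hashes
```

This is why the hybrid reduces receiver work: a single signature verification amortizes over the whole block of packets, and injected packets fail the cheap hash-path check without any further signature verifications.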
Abstract:
We report a comparative study of the magnetotransport properties of large-area vertical few-layer graphene networks with different morphologies, measured in a strong (up to 10 T) magnetic field over a wide temperature range. The petal-like and tree-like graphene networks, grown by a plasma-enhanced CVD process on a thin (500 nm) silicon oxide layer supported by a silicon wafer, demonstrate a significant difference in their resistance versus magnetic field dependencies at temperatures ranging from 2 to 200 K. This behaviour is explained in terms of electron scattering at the ultra-long reactive edges and ultra-dense boundaries of the graphene nanowalls. Our results pave the way towards three-dimensional vertical graphene-based magnetoelectronic nanodevices with morphology-tuneable anisotropic magnetic properties. © The Royal Society of Chemistry 2013.
Abstract:
Hepatocellular carcinoma (HCC) is one of the primary hepatic malignancies and the third most common cause of cancer-related death worldwide. Although a wealth of knowledge has been gained concerning the initiation and progression of HCC over the last half century, efforts to improve our understanding of its pathogenesis at the molecular level are still greatly needed to enable clinicians to enhance the standards of current HCC diagnosis and treatment. In the post-genome era, advanced mass spectrometry-driven multi-omics technologies (e.g., profiling of DNA damage adducts, RNA modification profiling, proteomics, and metabolomics) stand at the interface between chemistry and biology, and have yielded valuable outcomes in the study of a diversity of complicated diseases. In particular, these technologies are being broadly used to dissect various biological aspects of HCC for the purposes of biomarker discovery, interrogation of pathogenesis, and therapeutic discovery. This knowledge-based critical review explores selected applications of these omics technologies in the HCC niche, with an emphasis on translational applications driven by advanced mass spectrometry toward specific clinical use for HCC patients. This approach will enable the biomedical community, through both basic research and the clinical sciences, to enhance the applicability of mass spectrometry-based omics technologies in dissecting the pathogenesis of HCC and could lead to novel therapeutic discoveries for HCC.
Abstract:
This paper describes research investigating expertise and the types of knowledge used by airport security screeners. It applies a multi-method approach incorporating eye tracking, concurrent verbal protocol and interviews. Results show that novice and expert security screeners primarily access perceptual knowledge and experience little difficulty during routine situations. During non-routine situations, however, experience was found to be a determining factor for effective interactions and problem solving. Experts were found to use strategic knowledge and demonstrated structured use of interface functions integrated into efficient problem-solving sequences. Comparatively, novices experienced more knowledge limitations and uncertainty, resulting in interaction breakdowns characterised by trial-and-error interaction sequences. This research suggests that the quality of knowledge security screeners have access to has implications for visual and physical interface interactions and their integration into problem-solving sequences. Implications and recommendations for the design of interfaces used in the airport security screening context are discussed. These recommendations aim to improve the integration of interactions into problem-solving sequences, encourage the development of problem-scheme knowledge, and support the skills and knowledge of the personnel who interact with security screening systems.
Abstract:
Recently, a variety of high-aspect-ratio nanostructures have been grown and profiled for applications ranging from field emission transistors to gene/drug delivery devices. However, fabricating and processing arrays of these structures, and determining how changing certain physical parameters affects the final outcome, is quite challenging. We have developed several modules that can be used to simulate the processes of various physical vapour deposition systems, from precursor interaction in the gas phase to gas-surface interactions and surface processes. In this paper, multi-scale hybrid numerical simulations are used to study how low-temperature non-equilibrium plasmas can be employed in the processing of high-aspect-ratio structures such that the resulting nanostructures have properties suitable for their eventual device application. We show that whilst using plasma techniques is beneficial in many nanofabrication processes, it is especially useful in making dense arrays of high-aspect-ratio nanostructures.
Abstract:
Multi-party key agreement protocols implicitly assume that each principal contributes equally to the final form of the key. In this paper we consider three malleability attacks on multi-party key agreement protocols. The first attack, called strong key control, allows a dishonest principal (or a group of principals) to fix the key to a pre-set value. The second attack is weak key control, in which the key is still random but is drawn from a much smaller set than expected. The third attack, selective key control, allows a dishonest principal (or a group of dishonest principals) to remove the contribution of honest principals to the group key. The paper discusses these three attacks on several key agreement protocols, including DH (Diffie-Hellman), BD (Burmester-Desmedt) and JV (Just-Vaudenay). We show that dishonest principals in all three protocols can weakly control the key, and that the only protocol which does not allow strong key control is the DH protocol. The BD and JV protocols permit any pair of neighboring principals to modify the group key, and this modification remains undetected by honest principals.
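The flavor of strong key control is easiest to see in a deliberately naive protocol (a hypothetical XOR-combination exchange, not BD or JV, whose attacks exploit algebraic structure rather than XOR): whoever speaks last can steer the "jointly contributed" key to any preset value.

```python
import secrets

# Hypothetical protocol: each principal broadcasts a nonce and the group key
# is the XOR of all nonces. The last principal to speak gains strong key control.
honest_nonces = [secrets.randbits(128) for _ in range(3)]

target_key = 0xDEADBEEF            # value the dishonest principal wants to force

partial = 0
for n in honest_nonces:
    partial ^= n                   # everything the cheater has seen so far

dishonest_nonce = partial ^ target_key   # chosen AFTER seeing the others

key = partial ^ dishonest_nonce
print(hex(key))                    # always 0xdeadbeef, regardless of honest input
```

Honest participants cannot detect this: the dishonest nonce looks random, yet the honest contributions cancel out exactly, which is the same cancellation effect the selective key control attack achieves against the group protocols above.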