991 results for Bi-modal authentication
Abstract:
Most current computer systems authorise the user at the start of a session and do not detect whether the current user is still the initial authorised user, a substitute user, or an intruder pretending to be a valid user. A system that continuously and unobtrusively verifies the user's identity throughout the session is therefore needed. Such a system is called a continuous authentication system (CAS). Researchers have applied several approaches to CAS, most of them based on biometrics. These continuous biometric authentication systems (CBAS) are driven by user traits and characteristics. One of the main biometric modalities is keystroke dynamics, which has been widely tried and accepted for providing continuous user authentication. Keystroke dynamics is appealing for many reasons. First, it is less obtrusive, since users will be typing on the computer keyboard anyway. Second, it does not require extra hardware. Finally, keystroke dynamics remains available after the authentication step at the start of the computer session. Currently, research on CBAS based on keystroke dynamics is insufficient. To date, most existing schemes ignore the continuous authentication scenarios, which might limit their practicality in different real-world applications. Also, contemporary keystroke-dynamics CBAS use character sequences as features representative of user typing behavior, but their feature selection criteria do not guarantee features with strong statistical significance, which may result in a less accurate statistical user representation. Furthermore, their selected features do not inherently incorporate user typing behavior. Finally, existing CBAS based on keystroke dynamics typically depend on pre-defined user-typing models for continuous authentication. This dependency restricts the systems to authenticating only known users whose typing samples have been modelled. This research addresses these limitations by developing a generic model to better identify and understand the characteristics and requirements of each type of CBAS and continuous authentication scenario. The research also proposes four statistical feature selection techniques that choose features with the highest statistical significance and encompass different user typing behaviors, representing user typing patterns effectively. Finally, the research proposes a user-independent threshold approach that can authenticate a user accurately without needing any predefined user typing model a priori, and enhances the technique to detect an impostor or intruder who may take over at any point during the computer session.
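A minimal illustrative sketch of the kind of mechanism this abstract describes, assuming digraph (consecutive key-pair) latencies as features and a population-derived z-score threshold; the function names and the cutoff are hypothetical, not the thesis's actual techniques:

```python
import statistics

def digraph_latencies(events):
    """events: list of (key, press_time_ms) tuples in typing order."""
    # Latency between consecutive key presses is a classic
    # keystroke-dynamics feature.
    return [b[1] - a[1] for a, b in zip(events, events[1:])]

def anomaly_score(session, reference):
    """z-score of the session's mean digraph latency against a reference
    sample; a user-independent scheme derives the reference from
    population statistics rather than a per-user model."""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    return abs(statistics.mean(session) - mu) / sigma

# Hypothetical decision rule: flag a possible impostor when the score
# exceeds a population-derived cutoff, e.g. anomaly_score(s, ref) > 2.5.
```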
Abstract:
In most visual mapping applications suited to Autonomous Underwater Vehicles (AUVs), stereo visual odometry (VO) is rarely utilised as a pose estimator, as imagery is typically captured at a very low frame rate due to energy conservation and data storage requirements. This adversely affects the robustness of a vision-based pose estimator and its ability to generate a smooth trajectory. This paper presents a novel VO pipeline for low-overlap imagery from an AUV that utilises constrained motion and integrates magnetometer data in a bi-objective bundle adjustment stage to achieve low-drift pose estimates over large trajectories. We analyse the performance of a standard stereo VO algorithm and compare the results to the modified VO algorithm. Results are demonstrated in a virtual environment in addition to low-overlap imagery gathered from an AUV. The modified VO algorithm shows significantly improved pose accuracy and performance over trajectories of more than 300 m. In addition, dense 3D meshes generated from the visual odometry pipeline are presented as a qualitative output of the solution.
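The abstract does not spell out the residuals, but a bi-objective bundle adjustment of this kind can be written schematically as below, where T_i are vehicle poses, X_j landmarks, pi the stereo projection, u_ij the observed image features, psi(T_i) the heading implied by pose i, psi_i^mag the magnetometer heading, and lambda a weight trading off the two objectives. This form is an assumption for illustration only, not the paper's exact formulation.

```latex
% Assumed illustrative form of the bi-objective bundle adjustment:
\min_{\{T_i\},\{X_j\}}
  \sum_{i,j} \bigl\lVert \pi(T_i, X_j) - u_{ij} \bigr\rVert^2
  \;+\; \lambda \sum_i \bigl\lVert \psi(T_i) - \psi^{\mathrm{mag}}_i \bigr\rVert^2
```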
Abstract:
Bi-2212 tapes are prepared by a combination of dip-coating and partial melt processing. We investigate the effect of re-melting those tapes by partial melting followed by slow cooling on the structure and superconducting properties. Microstructural studies of re-melted samples show that they have the same overall composition as partially melted tapes. However, the fractional volumes of the secondary phases differ, and the amounts and distribution of the secondary phases have a significant effect on the critical current. The critical current of Bi-2212/Ag tapes strongly depends on the maximum processing temperature. Initial Jc values of the tapes, which are partially melted, then slowly solidified at optimum conditions and finally post-annealed in an inert atmosphere, are up to 10.4 × 10³ A/cm². It is found that the maximum processing temperature at initial partial melting has an influence on the optimum re-heat treatment conditions for the tapes. Re-melted tapes processed at optimum conditions recover their superconducting properties after post-annealing in an inert atmosphere: the Jc values of the tapes are about 80-110% of their initial Jc values.
Abstract:
Superconducting composite Bi-2212/Ag tapes and their joints are fabricated by a combination of dip-coating and partial melt processing. The heat treated tapes have a critical current (Ic) between 8 and 26 A, depending on tape thickness and the number of Bi-2212 layers. Current transmissions between 80% and 100% have been achieved through the joints of tapes. Different types of HTS joints of Bi-2212/Ag laminated tapes are made and their transport properties during winding operations are investigated. Irreversible strain values (ε_irrev) for laminated tapes and their joints are determined, and it is found that the degradation of Ic during tape bending depends on the type of joint.
Abstract:
Different types of HTS joints of Bi-2212/Ag tapes and laminates, which are fabricated by dip-coating and partial-melt processes, have been investigated. All joints are prepared using green single and laminated tapes according to the scheme coating-joining-processing. The heat treated tapes have a critical current (Ic) between 7 and 27 A, depending on tape thickness and the number of Bi-2212 ceramic layers in laminated tapes. It is found that the current transport properties of joints depend on the type of laminate, joint configuration and joint treatment. Ic losses in joints of Bi-2212 tapes and laminates are attributed to defects in their structure, such as pores, secondary phases and misalignment of Bi-2212 grains near the Ag edges. By optimizing joint configuration, current transmission up to 100% is achieved for both single tapes and laminated tapes.
Abstract:
Superconducting Bi-2212 tapes and laminates are fabricated by a combination of dip-coating and partial melt processing. The heat treated tapes have critical current densities (Jc) up to 11 kA/cm². We investigate the degradation of critical current (Ic) during bending experiments for both single tapes and tapes with a laminate structure. Although degradation of Ic is observed in both forms, the characteristics of the degradation differ. It is determined that laminated tapes perform better than single tapes when critical current is measured against bending radius, and laminated tapes tolerate a higher strain for a given reduction in critical current. It is found that increasing the number of Bi-2212 layers increases the total Ic of the laminated tape, but degradation of critical current is more pronounced during bending because of the increased total thickness of the laminate structure. It is also found that the addition of silver to the Bi-2212 layers reduces critical current degradation during bending for both tapes and laminates.
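The thickness effect noted above follows from the standard outer-fibre bending-strain relation (textbook mechanics, not stated in the abstract): for a tape of total thickness t bent to radius r,

```latex
% Outer-fibre strain of a tape of thickness t bent to radius r:
\varepsilon \approx \frac{t}{2r}
```

so doubling the laminate thickness roughly doubles the peak strain at a given bending radius.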
Abstract:
Superconducting thick films of Bi2Sr2CaCu2Oy (Bi-2212) on single-crystalline (100) MgO substrates have been prepared using a doctor-blade technique and a partial-melt process. It is found that the phase composition and the amount of Ag addition to the paste affect the structure and superconducting properties of the partially melted thick films. The optimum heat treatment schedule for obtaining high Jc has been determined for each paste. The heat treatment ensures attainment of high purity for the crystalline Bi-2212 phase and high orientation of Bi-2212 crystals, in which the c-axis is perpendicular to the substrate. The highest Tc, obtained by resistivity measurement, is 92.2 K. The best transport Jc value of these thick films, measured at 77 K in self-field, is 8 × 10³ A/cm².
Abstract:
The structure and composition of reaction products between Bi-Sr-Ca-Cu-oxide (BSCCO) thick films and alumina substrates have been characterized using a combination of electron diffraction, scanning electron microscopy and energy dispersive X-ray spectrometry (EDX). Sr and Ca are found to be the cations most reactive with alumina. Sr4Al6O12SO4 is formed between the alumina substrates and BSCCO thick films prepared from paste with composition close to Bi-2212 (and Bi-2212 + 10 wt.% Ag). For paste with composition close to Bi(Pb)-2223 + 20 wt.% Ag, a new phase with f.c.c. structure, lattice parameter about a = 24.5 Å and approximate composition Al3Sr2CaBi2CuOx has been identified in the interface region. Understanding and control of these reactions are essential for the growth of high quality BSCCO thick films on alumina.
Abstract:
The microstructure of Bi-Sr-Ca-Cu-oxide (BSCCO) thick films on alumina substrates has been characterized using a combination of X-ray diffractometry, scanning electron microscopy, transmission electron microscopy of sections across the film/substrate interface and energy-dispersive X-ray spectrometry. A reaction layer formed between the BSCCO films and the alumina substrates. This chemical interaction is largely responsible for the off-stoichiometry of the films and is more significant after partial melting of the films. A new phase with f.c.c. structure, lattice parameter a = 2.45 nm and approximate composition Al3Sr2CaBi2CuOx has been identified as a reaction product between BSCCO and Al2O3.
Abstract:
We blend research from human-computer interface (HCI) design with computation-based cryptographic provable security. We explore the notion of practice-oriented provable security (POPS), moving the focus to a higher level of abstraction (POPS+) for use in providing provable security for security ceremonies involving humans. In doing so we highlight some challenges and paradigm shifts required to achieve meaningful provable security for a protocol which includes a human. We move the focus of security ceremonies from being protocols in their context of use to being cryptographic building blocks in a higher-level protocol (the security ceremony), to which POPS can be applied. In order to illustrate the need for our approach, we analyse both a protocol proven secure in theory and a similar protocol implemented by a financial institution, from both HCI and cryptographic perspectives.
Abstract:
Security of RFID authentication protocols has received considerable interest recently. However, an important aspect of such protocols that has not received as much attention is the efficiency of their communication. In this paper we investigate the efficiency benefits of pre-computation for time-constrained applications in small to medium RFID networks. We also outline a protocol utilizing this mechanism in order to demonstrate the benefits and drawbacks of the approach. The proposed protocol shows promising results, as it is able to offer the security of untraceable protocols whilst requiring only time comparable to that of more efficient but traceable protocols.
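To illustrate why pre-computation helps here, the sketch below shows a generic hash-lookup scheme (hypothetical, not the paper's actual protocol): the reader precomputes the expected anonymous responses for a coming time slot offline, so online identification becomes a constant-time table lookup instead of hashing every candidate secret per query.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Offline phase: precompute the response -> tag-ID table for one time slot.
def precompute_table(tag_secrets: dict, timeslot: int) -> dict:
    return {h(secret + timeslot.to_bytes(8, "big")): tag_id
            for tag_id, secret in tag_secrets.items()}

# Online phase: constant-time identification of an anonymous tag response.
def identify(table: dict, response: bytes):
    return table.get(response)  # None if unknown (possibly adversarial)

# Responses change every time slot, so an eavesdropper cannot link two
# sightings of the same tag, yet the reader still answers in O(1).
```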
Abstract:
We introduce a lightweight biometric solution for user authentication over networks using online handwritten signatures. The algorithm proposed is based on a modified Hausdorff distance and has favorable characteristics such as low computational cost and minimal training requirements. Furthermore, we investigate an information theoretic model for capacity and performance analysis for biometric authentication which brings additional theoretical insights to the problem. A fully functional proof-of-concept prototype that relies on commonly available off-the-shelf hardware is developed as a client-server system that supports Web services. Initial experimental results show that the algorithm performs well despite its low computational requirements and is resilient against over-the-shoulder attacks.
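For reference, one widely used modified Hausdorff distance (the Dubuisson-Jain variant) between two point sets can be sketched as below; the paper's specific modification may differ, so treat this as an illustrative assumption rather than the proposed algorithm.

```python
import numpy as np

def _directed_mhd(A: np.ndarray, B: np.ndarray) -> float:
    """Mean distance from each point of A to its nearest point in B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # (|A|, |B|)
    return d.min(axis=1).mean()

def modified_hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """Symmetric modified Hausdorff distance between point sets A and B."""
    return max(_directed_mhd(A, B), _directed_mhd(B, A))

# Usage: treat an online signature as an (N, 2) array of pen coordinates and
# accept the claimant when the distance to the enrolled sample falls below a
# verification threshold chosen on training data.
```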
Abstract:
Secure communications in distributed Wireless Sensor Networks (WSN) operating under adversarial conditions necessitate efficient key management schemes. In the absence of a priori knowledge of post-deployment network configuration, and due to limited resources at sensor nodes, key management schemes cannot be based on post-deployment computations. Instead, a list of keys, called a key-chain, is distributed to each sensor node before deployment. For secure communication, either two nodes should have a key in common in their key-chains, or they should establish a key through a secure-path on which every link is secured with a key. We first provide a comparative survey of well known key management solutions for WSN. Probabilistic, deterministic and hybrid key management solutions are presented and compared based on their security properties and resource usage. We provide a taxonomy of solutions and identify trade-offs in them to conclude that there is no one-size-fits-all solution. Second, we design and analyze deterministic and hybrid techniques to distribute pair-wise keys to sensor nodes before deployment. We present novel deterministic and hybrid approaches based on combinatorial design theory and graph theory for deciding how many and which keys to assign to each key-chain before the sensor network deployment. Performance and security of the proposed schemes are studied both analytically and computationally. Third, we address the key establishment problem in WSN, in which key agreement algorithms without authentication must be executed over a secure-path. The length of the secure-path impacts the power consumption and the initialization delay for a WSN before it becomes operational. We formulate the key establishment problem as a constrained bi-objective optimization problem, break it into two sub-problems, and show that they are both NP-Hard and MAX-SNP-Hard. Having established inapproximability results, we focus on addressing the authentication problem that prevents key agreement algorithms from being used directly over a wireless link. We present a fully distributed algorithm where each pair of nodes can establish a key with authentication by using their neighbors as witnesses.
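A toy sketch of the key-chain idea described above: a link is secure exactly when two nodes' pre-distributed key-chains intersect, and the resulting secure-link graph is what secure-paths are routed over. The key assignments here are arbitrary placeholders, not the combinatorial designs studied in the thesis.

```python
def shares_key(chain_a: set, chain_b: set) -> bool:
    """Two nodes can secure their link iff their key-chains intersect."""
    return not chain_a.isdisjoint(chain_b)

def secure_links(key_chains: dict) -> set:
    """Build the secure-link graph (as an edge set) over node IDs."""
    nodes = list(key_chains)
    return {(u, v)
            for i, u in enumerate(nodes)
            for v in nodes[i + 1:]
            if shares_key(key_chains[u], key_chains[v])}

# Example: nodes 1 and 2 share "k3" and nodes 2 and 3 share "k5", so nodes
# 1 and 3, having no common key, must establish one over the secure-path
# 1-2-3.
links = secure_links({1: {"k1", "k3"}, 2: {"k3", "k5"}, 3: {"k5", "k7"}})
```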
Abstract:
Traffic congestion has a significant impact on the economy and the environment. Encouraging the use of multi-modal transport (public transport, bicycle, park'n'ride, etc.) has been identified by traffic operators as a good strategy for tackling congestion and its detrimental environmental impacts. A multi-modal, multi-objective trip planner provides users with various multi-modal options optimised for the objectives they prefer (cheapest, fastest, safest, etc.) and has the potential to reduce congestion on both a temporal and a spatial scale. The computation of multi-modal, multi-objective trips is a complicated mathematical problem, as it must integrate and utilise a diverse range of large data sets, including both road network information and public transport schedules, while optimising for a number of competing objectives, where fully optimising for one objective, such as travel time, can adversely affect other objectives, such as cost. The relationship between these objectives can also be quite subjective, as their priorities will vary from user to user. This paper will first outline the various data requirements and formats needed for the multi-modal, multi-objective trip planner to operate, including static information about the physical infrastructure within Brisbane as well as real-time and historical data to predict traffic flow on the road network and the status of public transport. It will then present information on the graph data structures representing the road and public transport networks within Brisbane that are used in the trip planner to calculate optimal routes. This will allow for an investigation into the various shortest path algorithms that have been researched over the last few decades, and provide a foundation for the construction of the Multi-modal Multi-objective Trip Planner through the development of innovative new algorithms that can operate on the large, diverse data sets and competing objectives.
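As a minimal illustration of the optimisation at the core of such a planner (not the paper's algorithms, which it leaves for future development), the sketch below scalarises two competing objectives, travel time and fare, with user-chosen weights and runs Dijkstra over a toy multi-modal graph; a truly multi-objective planner would instead keep Pareto-optimal label sets per node.

```python
import heapq

def plan_trip(graph, source, target, w_time=1.0, w_cost=0.0):
    """graph: {node: [(neighbour, travel_time_min, fare), ...]}.
    Returns (scalarised cost, path) or None if target is unreachable."""
    frontier = [(0.0, source, [source])]
    best = {source: 0.0}
    while frontier:
        d, node, path = heapq.heappop(frontier)
        if node == target:
            return d, path
        for nxt, t, fare in graph.get(node, []):
            nd = d + w_time * t + w_cost * fare
            if nd < best.get(nxt, float("inf")):
                best[nxt] = nd
                heapq.heappush(frontier, (nd, nxt, path + [nxt]))
    return None

# A "cheapest" user raises w_cost relative to w_time; a "fastest" user does
# the opposite. Walking, bus and rail legs simply become edges with different
# time/fare attributes in the same graph.
```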
Abstract:
The use of Wireless Sensor Networks (WSNs) for Structural Health Monitoring (SHM) has become a promising approach due to many advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data synchronization error and data loss have prevented these systems from being extensively used. Recently, several SHM-oriented WSNs have been proposed and believed to be able to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of those WSNs with respect to demanding SHM applications like modal analysis and damage identification. This paper first presents a brief review of the most inherent uncertainties of SHM-oriented WSN platforms and then investigates their effects on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when employing merged data from multiple tests. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and Data-driven Stochastic Subspace Identification (SSI-data), as both have been widely applied in the past decade. Experimental accelerations collected by a wired sensory system on a large-scale laboratory bridge model are initially used as clean data before being contaminated by different data pollutants in a sequential manner to simulate practical SHM-oriented WSN uncertainties. The results of this study show the robustness of FDD and the precautions needed for the SSI-data family when dealing with SHM-WSN uncertainties. Finally, the use of measurement channel projection for the time-domain OMA techniques and a preferred combination of the OMA techniques to cope with the SHM-WSN uncertainties are recommended.
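For orientation, the FDD technique named above reduces to a short computation: estimate the cross-power spectral density (CSD) matrix of all acceleration channels, take its SVD at each frequency line, and pick peaks of the first singular value as candidate modal frequencies (the matching singular vectors approximate mode shapes). A bare-bones sketch with placeholder parameters (fs, nperseg) follows; it is a generic textbook FDD outline, not the paper's implementation.

```python
import numpy as np
from scipy import signal

def fdd(acc: np.ndarray, fs: float, nperseg: int = 1024):
    """acc: (n_samples, n_channels) array of accelerations.
    Returns frequencies and the first singular value per frequency line."""
    n_ch = acc.shape[1]
    f, _ = signal.csd(acc[:, 0], acc[:, 0], fs=fs, nperseg=nperseg)
    G = np.empty((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = signal.csd(acc[:, i], acc[:, j],
                                       fs=fs, nperseg=nperseg)
    # Peaks of the first singular value indicate natural frequencies; the
    # corresponding singular vectors approximate the mode shapes.
    s1 = np.array([np.linalg.svd(G[k], compute_uv=False)[0]
                   for k in range(len(f))])
    return f, s1
```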