826 results for 2D barcode based authentication scheme
Abstract:
Financial information is extremely sensitive. Hence, electronic banking must provide a robust system to authenticate its customers and let them access their data remotely. On the other hand, such a system must be usable, affordable, and portable. We propose a challenge-response based one-time password (OTP) scheme that uses symmetric cryptography in combination with a hardware security module. The proposed protocol safeguards passwords from keyloggers and phishing attacks. Besides, this solution provides convenient mobility for users who want to bank online anytime and anywhere, not just from their own trusted computers.
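The abstract does not give the protocol details, but a minimal sketch of a generic challenge-response OTP built on symmetric cryptography might look like the following; the HMAC construction, the key handling, and the 8-digit truncation are illustrative assumptions, not the authors' design.

```python
import hmac
import hashlib
import os

# Shared symmetric key, provisioned in the hardware security module
# and mirrored at the bank. Key size and names are illustrative.
SHARED_KEY = os.urandom(32)

def server_challenge() -> bytes:
    """Bank side: issue a fresh random challenge (nonce)."""
    return os.urandom(16)

def token_response(key: bytes, challenge: bytes) -> str:
    """Token side: derive a short OTP from the challenge with HMAC-SHA256.
    Truncating to 8 digits keeps the code typeable by the user."""
    digest = hmac.new(key, challenge, hashlib.sha256).digest()
    return str(int.from_bytes(digest[:4], "big") % 10**8).zfill(8)

def server_verify(key: bytes, challenge: bytes, otp: str) -> bool:
    """Bank side: recompute the OTP and compare in constant time."""
    expected = token_response(key, challenge)
    return hmac.compare_digest(expected, otp)

challenge = server_challenge()
otp = token_response(SHARED_KEY, challenge)   # computed inside the HSM
assert server_verify(SHARED_KEY, challenge, otp)
```

Because a fresh challenge is issued per login, a keylogged or phished OTP is useless for any later session.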
Abstract:
Three new bimetallic oxamato-based magnets with the proligand 4,5-dimethyl-1,2-phenylenebis(oxamato) (dmopba) were synthesized using water or dimethylsulfoxide (DMSO) as solvents. Single-crystal X-ray diffraction provided structures for two of them: [MnCu(dmopba)(H₂O)₃]ₙ·4nH₂O (1) and [MnCu(dmopba)(DMSO)₃]ₙ·nDMSO (2). The crystalline structures of both 1 and 2 consist of linearly ordered oxamato-bridged Mn(II)Cu(II) bimetallic chains. The magnetic characterization revealed behaviour typical of ferrimagnetic chains for 1 and 2. Least-squares fits of the experimental magnetic data performed in the 300–20 K temperature range led to J_MnCu = −27.9 cm⁻¹, g_Cu = 2.09 and g_Mn = 1.98 for 1, and J_MnCu = −30.5 cm⁻¹, g_Cu = 2.09 and g_Mn = 2.02 for 2 (H = −J_MnCu Σ_i S_Mn,i·(S_Cu,i + S_Cu,i−1)). The two-dimensional ferrimagnetic system [Me₄N]₂ₙ{Co₂[Cu(dmopba)]₃}·4nDMSO·nH₂O (3) was prepared by reaction of Co(II) ions with an excess of [Cu(dmopba)]²⁻ in DMSO. The study of the temperature dependence of the magnetic susceptibility, as well as the temperature and field dependences of the magnetization, revealed cluster-glass-like behaviour for 3.
Abstract:
In this paper, a new PCA-based positioning sensor and localization system for mobile robots operating in unstructured environments (e.g. industry, services, domestic settings) is proposed and experimentally validated. The inexpensive positioning system resorts to principal component analysis (PCA) of images acquired by an onboard video camera looking up at the ceiling. This solution has the advantage of avoiding the need to select and extract features. The principal components of the acquired images are compared with previously registered images, stored in a compact onboard image database, and the measured position is fused with odometry data. Optimal estimates of position and slippage are provided by Kalman filters with globally stable error dynamics. The experimental validation reported in this work focuses on the results of a set of experiments carried out in a real environment, in which the robot travels along a lawn-mower trajectory. A small position error estimate with bounded covariance was always observed, for arbitrarily long experiments, and slippage was estimated accurately in real time.
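As a rough illustration of the core idea, the sketch below projects a ceiling image onto a PCA basis learned from the stored database and returns the position of the nearest registered image; the image sizes, the database contents, and the nearest-neighbour matching rule are assumptions for illustration only.

```python
import numpy as np

# Illustrative database: 50 registered ceiling images, flattened to vectors.
rng = np.random.default_rng(0)
database = rng.random((50, 64 * 64))        # 50 images of 64x64 pixels
positions = rng.random((50, 2)) * 10.0      # (x, y) where each was taken

# Learn a low-dimensional PCA basis from the registered images.
mean = database.mean(axis=0)
centered = database - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
basis = vt[:10]                              # keep 10 principal components
db_codes = centered @ basis.T                # project the database once

def localize(image: np.ndarray) -> np.ndarray:
    """Project the acquired image into PCA space and return the position
    of the closest registered image (nearest neighbour)."""
    code = (image.ravel() - mean) @ basis.T
    nearest = np.argmin(np.linalg.norm(db_codes - code, axis=1))
    return positions[nearest]

print(localize(database[7].reshape(64, 64)))  # recovers positions[7]
```

In the actual system this raw PCA measurement would then be fused with odometry in a Kalman filter rather than used directly.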
Abstract:
An online scheme to assign Stenotrophomonas isolates to genomic groups was developed using multilocus sequence analysis (MLSA), based on DNA sequencing of selected fragments of the housekeeping genes for the ATP synthase alpha subunit (atpA), the recombination repair protein (recA), the RNA polymerase alpha subunit (rpoA) and the excision repair beta subunit (uvrB). This MLSA-based scheme was validated using eight of the 10 Stenotrophomonas species that have been previously described. Environmental and nosocomial Stenotrophomonas strains were characterised using MLSA, 16S rRNA sequencing and DNA-DNA hybridisation (DDH) analyses. Strains of the same species were found to have greater than 95% concatenated sequence similarity, and specific strains formed cohesive, readily recognisable phylogenetic groups. MLSA therefore appears to be an effective alternative to amplified fragment length polymorphism fingerprinting and DDH techniques. Strains of Stenotrophomonas can be readily assigned through the open database resource developed in the current study (www.steno.lncc.br/).
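The assignment rule the abstract describes, concatenate the four gene fragments and require greater than 95% similarity to call two strains conspecific, can be sketched as follows; the toy sequences and the simple identity measure are illustrative assumptions (real MLSA works on aligned fragments hundreds of bases long).

```python
def concatenated_identity(genes_a: dict, genes_b: dict,
                          loci=("atpA", "recA", "rpoA", "uvrB")) -> float:
    """Percent identity over the concatenation of the four housekeeping
    gene fragments (assumes aligned, equal-length fragments per locus)."""
    seq_a = "".join(genes_a[locus] for locus in loci)
    seq_b = "".join(genes_b[locus] for locus in loci)
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

def same_species(genes_a: dict, genes_b: dict, threshold: float = 95.0) -> bool:
    """Abstract's rule: same species if concatenated similarity > ~95%."""
    return concatenated_identity(genes_a, genes_b) >= threshold

# Toy fragments, one short string per locus.
a = {"atpA": "ATGGCT", "recA": "TTGACA", "rpoA": "GGCATC", "uvrB": "CCTGAA"}
b = {"atpA": "ATGGCT", "recA": "TTGACA", "rpoA": "GGCATT", "uvrB": "CCTGAA"}
print(concatenated_identity(a, b), same_species(a, b))  # 95.83... True
```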
Abstract:
IoT consists essentially of thousands of tiny sensor nodes interconnected to the Internet, each of which executes its programmed functions under memory and power limitations. The sensor nodes are distributed mainly to gather data in various situations. IoT envisions future technologies such as e-health, smart cities, automobile automation, construction-site automation, and smart homes. Secure communication of data under memory and energy constraints is a major challenge in IoT. Authentication is the first and most important phase of secure communication. This study presents a protocol to authenticate resource-constrained devices in physical proximity solely using their shared wireless communication interfaces. This model of authentication relies only on the abundance of ambient radio signals, and authenticates in less than a second. To evaluate the designed protocol, SkyMotes are emulated in a network environment simulated by Contiki/COOJA. The results presented in this study show that the approach is immune to passive and active attacks: an adversary located as near as two meters away can be identified in less than a second with minimal energy expense. Since only the radio is required as authentication hardware, the technique is scalable and interoperable with the heterogeneous nature of IoT.
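The abstract leaves the pairing test unspecified; a common proximity-authentication idea is that two co-located radios observe highly correlated ambient RSSI traces while a distant adversary does not, sketched below. The sampling model and the correlation threshold are assumptions, not the protocol's published parameters.

```python
import numpy as np

def rssi_fingerprint_match(rssi_a, rssi_b, threshold: float = 0.9) -> bool:
    """Authenticate if the two devices' ambient-RSSI traces, sampled over
    the same window, are strongly correlated (co-located radios see a
    similar channel; a distant adversary sees an independent one)."""
    a = np.asarray(rssi_a, dtype=float)
    b = np.asarray(rssi_b, dtype=float)
    corr = np.corrcoef(a, b)[0, 1]
    return corr >= threshold

rng = np.random.default_rng(1)
channel = rng.normal(-70, 5, 100)               # shared ambient signal (dBm)
device = channel + rng.normal(0, 0.5, 100)      # nearby prover: small noise
attacker = rng.normal(-70, 5, 100)              # distant adversary
print(rssi_fingerprint_match(channel, device))   # True
print(rssi_fingerprint_match(channel, attacker)) # False
```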
Abstract:
Clustering schemes improve the energy efficiency of wireless sensor networks. Including mobility as a new criterion for cluster creation and maintenance adds new challenges for these clustering schemes. In most algorithms, cluster formation and cluster-head selection are done on a stochastic basis. In this paper we introduce a cluster formation and routing algorithm based on a mobility factor. The proposed algorithm is compared with the LEACH-M protocol on metrics such as the number of cluster-head transitions, average residual energy, number of alive nodes, and number of messages lost.
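The abstract does not define the mobility factor; one plausible reading, sketched below, scores each node by residual energy and by how little it has moved since the last round, then elects the highest-scoring node in each cluster as head. The weighting and the displacement measure are illustrative assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    energy: float          # residual energy (illustrative units)
    prev_pos: tuple        # position at the previous round
    pos: tuple             # current position

def mobility_factor(n: Node) -> float:
    """Displacement since the last round; lower means more stable."""
    return math.dist(n.prev_pos, n.pos)

def elect_cluster_head(cluster: list, w_energy=0.6, w_mobility=0.4) -> Node:
    """Prefer high residual energy and low mobility (assumed weighting)."""
    def score(n: Node) -> float:
        return w_energy * n.energy - w_mobility * mobility_factor(n)
    return max(cluster, key=score)

cluster = [
    Node(1, energy=0.9, prev_pos=(0, 0), pos=(5, 5)),   # mobile node
    Node(2, energy=0.8, prev_pos=(1, 1), pos=(1, 1)),   # static node
]
print(elect_cluster_head(cluster).node_id)  # 2: less energy, far more stable
```

Favouring stable nodes as heads is what would reduce cluster-head transitions relative to the stochastic selection of LEACH-M.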
Abstract:
This paper proposes a novel method for authenticating users in secure buildings. The main objective is to investigate whether user actions in the built environment can produce consistent behavioural signatures upon which a building intrusion detection system could be based. In the process, three behavioural expressions were discovered: time-invariant, co-dependent and idiosyncratic.
Abstract:
We present a new method, based on the idea of the meccano method and a novel T-mesh optimization procedure, to construct a T-spline parameterization of 2D geometries for application in isogeometric analysis. The proposed method demands only a boundary representation of the geometry as input data. The algorithm obtains, as a result, a high-quality parametric transformation between the 2D object and the parametric domain, the unit square. First, we define a parametric mapping between the input boundary of the object and the boundary of the parametric domain. Then, we build a T-mesh adapted to the geometric singularities of the domain in order to preserve the features of the object boundary with a desired tolerance…
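The first step the abstract names, mapping the object boundary onto the boundary of the unit square, can be illustrated with a simple chord-length parameterization; sampling the boundary as a closed polyline and splitting it at four implicit corners are assumptions of this sketch, not the paper's construction.

```python
import numpy as np

def boundary_to_unit_square(points: np.ndarray) -> np.ndarray:
    """Map a closed 2D boundary polyline onto the unit-square boundary by
    chord length: parameter t in [0, 4) walks the four square edges."""
    seg = np.linalg.norm(np.diff(points, axis=0, append=points[:1]), axis=1)
    t = 4.0 * np.concatenate(([0.0], np.cumsum(seg)[:-1])) / seg.sum()
    out = np.empty((len(points), 2))
    for i, ti in enumerate(t):
        edge, s = int(ti), ti - int(ti)   # which square edge, and how far
        out[i] = [(s, 0), (1, s), (1 - s, 1), (0, 1 - s)][edge]
    return out

# Toy boundary: a circle sampled at 8 points maps to 8 points
# spaced evenly around the perimeter of the unit square.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
circle = np.c_[np.cos(theta), np.sin(theta)]
print(boundary_to_unit_square(circle))
```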
Abstract:
Coupling the shop-floor software system (SFS) with the set of production equipment (SPE) is a complex task. It involves open and proprietary standards and information and communication technologies, among other tools and techniques. Due to market turbulence, both custom solutions and standards-based solutions eventually require a considerable adaptation effort. The concept of loose coupling has been identified in the organizational design community as a support for organizational survival: its presence reduces the organization's resistance to changes in the environment. In this paper, the results obtained by the organizational design community are identified, translated and organized to support the solution of the SFS-SPE integration problem. A classical loose coupling model, developed by the organizational studies community, is abstracted and translated to the area of interest. Key aspects are identified to be used as promoters of loose coupling between SFS and SPE, and are presented in the form of a reference scheme. Furthermore, this reference scheme is proposed as a basis for the design and implementation of a generic coupling solution, or coupling framework, to be included as a loose coupling stage between SFS and SPE. A validation example with several sets of manufacturing equipment, using different physical communication media, controller commands, equipment programming languages and communication protocols, is presented, showing an acceptable level of autonomy gained by the SFS.
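One way to picture the loose-coupling stage the paper proposes is an adapter layer that hides each equipment set's physical medium and command dialect behind a single interface the SFS depends on; the interface and the two drivers below are purely illustrative, not the paper's reference scheme.

```python
from abc import ABC, abstractmethod

class EquipmentDriver(ABC):
    """Stable interface the shop-floor software (SFS) depends on.
    Concrete drivers absorb vendor-specific media and commands."""
    @abstractmethod
    def start_job(self, program: str) -> None: ...
    @abstractmethod
    def status(self) -> str: ...

class SerialCncDriver(EquipmentDriver):
    """Hypothetical driver for a CNC reached over a serial line."""
    def start_job(self, program: str) -> None:
        print(f"serial> LOAD {program}; CYCLE START")
    def status(self) -> str:
        return "RUNNING"

class EthernetRobotDriver(EquipmentDriver):
    """Hypothetical driver for a robot controller on Ethernet."""
    def start_job(self, program: str) -> None:
        print(f"tcp> run_program('{program}')")
    def status(self) -> str:
        return "IDLE"

def dispatch(job_queue: list, driver: EquipmentDriver) -> None:
    """The SFS code stays unchanged whichever equipment is plugged in."""
    for job in job_queue:
        driver.start_job(job)

dispatch(["PART-042"], SerialCncDriver())
dispatch(["PART-042"], EthernetRobotDriver())
```

Swapping equipment then means writing a new driver, not modifying the SFS, which is the autonomy the validation example measures.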
Abstract:
Computer networks produce tremendous amounts of event-based data that can be collected and managed to support an increasing number of new classes of pervasive applications, such as network monitoring and crisis management. Although the problem of distributed event-based management has been addressed in non-pervasive settings such as the Internet, the domain of pervasive networks has its own characteristics that make those results inapplicable. Many of these applications are based on time-series data that take the form of time-ordered series of events. Such applications must also handle large volumes of unexpected events, often modified on the fly and containing conflicting information, and must deal with rapidly changing contexts while producing results with low latency. Correlating events across contextual dimensions holds the key to expanding the capabilities and improving the performance of these applications, and this dissertation addresses that critical challenge. It establishes an effective scheme for complex-event semantic correlation. The scheme addresses epistemic uncertainty in computer networks by fusing event synchronization concepts with belief theory. Because event detection is distributed, time delays are considered: events are no longer instantaneous, but instead have a duration associated with them. Existing algorithms for synchronizing time are split into two classes, one of which is argued to converge faster and hence to be better suited for pervasive network management. Besides the temporal dimension, the scheme considers imprecision and uncertainty in event detection. A belief value is therefore associated with the semantics and the detection of composite events; this belief value is generated by consensus among participating entities in the network. The scheme taps into the in-network processing capabilities of pervasive computer networks and can withstand missing or conflicting information gathered from multiple participating entities. Thus, this dissertation advances knowledge in the field of network management by facilitating the full utilization of the characteristics offered by pervasive, distributed and wireless technologies in contemporary and future computer networks.
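As a toy rendering of two of the ingredients the dissertation combines, interval-valued events and consensus belief, the sketch below correlates two events only if their durations overlap and assigns the composite a belief averaged over the reporting nodes. The overlap rule and the averaging consensus operator are assumptions, not the dissertation's actual fusion scheme.

```python
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    start: float          # seconds; events have duration, not instants
    end: float
    beliefs: list         # belief values reported by participating nodes

def overlaps(a: Event, b: Event) -> bool:
    """Temporal correlation: the two detection intervals intersect."""
    return a.start <= b.end and b.start <= a.end

def consensus(beliefs: list) -> float:
    """Assumed consensus operator: mean of the nodes' belief values."""
    return sum(beliefs) / len(beliefs)

def correlate(a: Event, b: Event, min_belief: float = 0.5):
    """Emit a composite event if intervals overlap and joint belief is high."""
    if not overlaps(a, b):
        return None
    belief = consensus(a.beliefs + b.beliefs)
    return (f"{a.name}+{b.name}", belief) if belief >= min_belief else None

link_flap = Event("link_flap", 10.0, 12.5, beliefs=[0.9, 0.7])
cpu_spike = Event("cpu_spike", 11.0, 14.0, beliefs=[0.6, 0.8])
print(correlate(link_flap, cpu_spike))   # ('link_flap+cpu_spike', 0.75)
```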
Abstract:
Various audio watermarking methods are presently available, most of them geared towards copyright protection and copy protection. This was the key motivation for the notion of developing a speaker verification scheme that guarantees non-repudiation services, and this thesis is its outcome. The research presented in this thesis scrutinizes the field of audio watermarking, and the outcome is a speaker verification scheme that addresses issues allied to non-repudiation to a great extent. This work aimed at developing novel audio watermarking schemes utilizing the fundamental ideas of the Fast Fourier Transform (FFT) or the Fast Walsh-Hadamard Transform (FWHT). The Mel-Frequency Cepstral Coefficients (MFCC), the best parametric representation of acoustic signals, along with a few other key acoustic characteristics, are employed in crafting the new schemes. The audio watermark created is entirely dependent on the acoustic features, and is hence named FeatureMark; it is central to this work. In any watermarking scheme, the quality of the extracted watermark depends exclusively on the pre-processing stage; in this work, framing and windowing techniques are involved. The theme of non-repudiation is of immense significance in the audio watermarking schemes proposed in this work. Modification of the signal spectrum is achieved in a variety of ways by selecting appropriate FFT/FWHT coefficients, and the watermarking schemes were evaluated for their imperceptibility, robustness and capacity characteristics. The proposed schemes are unequivocally effective in maintaining sound quality, in retrieving the embedded FeatureMark, and in their capacity to hold the mark bits. The robustness of these marking schemes is achieved with the help of synchronization codes: a Barker code with the FFT-based FeatureMarking scheme and a Walsh code with the FWHT-based FeatureMarking scheme. Another important feature of these schemes is the employment of an encryption step in the preparation of the FeatureMark, which scrambles the signal features so that they remain unrevealed. A comparative study with existing watermarking schemes, together with experiments evaluating imperceptibility, robustness and capacity, confirms that the proposed schemes can serve as a baseline for efficient audio watermarking. The four new digital audio watermarking algorithms are remarkable in terms of performance, opening further opportunities for research.
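To make the coefficient-modification idea concrete, here is a minimal sketch of embedding bits in selected FFT magnitudes via quantization index modulation; the coefficient band, quantization step, and the QIM rule itself are assumptions, since the abstract does not disclose the thesis's actual embedding rule.

```python
import numpy as np

STEP = 0.5            # quantization step (assumed)
BAND = slice(40, 72)  # mid-frequency coefficients to carry bits (assumed)

def embed(frame: np.ndarray, bits: list) -> np.ndarray:
    """Quantize selected FFT magnitudes onto multiples of STEP (bit 0)
    or offsets of STEP/2 (bit 1), then return the time-domain frame."""
    spec = np.fft.rfft(frame)
    mag, phase = np.abs(spec), np.angle(spec)
    idx = range(BAND.start, BAND.start + len(bits))
    for k, bit in zip(idx, bits):
        q = np.round(mag[k] / STEP) * STEP
        mag[k] = q + (STEP / 2 if bit else 0.0)
    return np.fft.irfft(mag * np.exp(1j * phase), n=len(frame))

def extract(frame: np.ndarray, n_bits: int) -> list:
    """Recover bits by checking each magnitude's offset within its cell."""
    mag = np.abs(np.fft.rfft(frame))
    idx = range(BAND.start, BAND.start + n_bits)
    return [int(STEP / 4 < (mag[k] % STEP) < 3 * STEP / 4) for k in idx]

rng = np.random.default_rng(2)
frame = rng.normal(0, 1, 512)
bits = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed(frame, bits)
print(extract(marked, len(bits)) == bits)   # True
```

Keeping the modification in mid-band magnitudes and leaving the phase untouched is what preserves perceptual quality in schemes of this kind.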
Abstract:
In order to address the increasing compromise of user privacy on mobile devices, a Fuzzy Logic based implicit authentication scheme is proposed in this paper. The proposed scheme computes, in real time, an aggregate score based on selected features and a threshold derived from current and historic data depicting the user's routine. The tuned fuzzy system is then applied to the aggregate score and the threshold to determine the trust level of the current user. The proposed fuzzy-integrated implicit authentication scheme is designed to operate adaptively and completely in the background, require a minimal training period, and enable high system accuracy while providing timely detection of abnormal activity. In this paper, we explore Fuzzy Logic based authentication in depth. Gaussian and triangular membership functions are investigated and compared using real data collected over several weeks from different Android phone users. The presented results show that the proposed Fuzzy Logic approach is a highly effective and viable scheme for lightweight real-time implicit authentication on mobile devices.
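A minimal sketch of the two membership-function families the paper compares, and of turning an aggregate score into a trust level, might look like this; the parameter values and the three trust labels are illustrative assumptions, not the paper's tuned system.

```python
import math

def gaussian_mf(x: float, mean: float, sigma: float) -> float:
    """Gaussian membership: degree to which x belongs to the fuzzy set."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def triangular_mf(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def trust_level(score: float) -> str:
    """Fuzzify an aggregate behaviour score into a trust label by
    picking the set with the highest membership (illustrative sets)."""
    sets = {
        "low":    triangular_mf(score, -0.2, 0.0, 0.4),
        "medium": triangular_mf(score, 0.2, 0.5, 0.8),
        "high":   triangular_mf(score, 0.6, 1.0, 1.2),
    }
    return max(sets, key=sets.get)

print(trust_level(0.15), trust_level(0.55), trust_level(0.9))  # low medium high
print(round(gaussian_mf(0.55, 0.5, 0.15), 2))  # Gaussian alternative: 0.95
```

Swapping `triangular_mf` for `gaussian_mf` in `trust_level` is exactly the kind of comparison the paper reports on its Android data.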
Abstract:
Biometrics deals with the physiological and behavioural characteristics of an individual to establish identity. Fingerprint-based authentication is the most advanced biometric authentication technology. The minutiae-based fingerprint identification method offers a reasonable identification rate. The minutiae feature map consists of about 70-100 minutia points, and matching accuracy drops as the size of the database grows. Hence it is imperative to make the fingerprint feature code as small as possible so that identification becomes easier. In this research, a novel global-singularity-based fingerprint representation is proposed. The fingerprint baseline, which is the line between the distal and intermediate phalangeal joint lines in the fingerprint, is taken as the reference line. A polygon is formed from the singularities and the fingerprint baseline. The feature vector comprises the polygon's angles, sides, area and type, and the ridge counts between the singularities. A 100% recognition rate is achieved with this method. The method is compared with the conventional minutiae-based recognition method in terms of computation time, receiver operating characteristic (ROC) and feature vector length. Speech is a behavioural biometric modality and can be used to identify a speaker. In this work, MFCCs of text-dependent speech are computed and clustered using the k-means algorithm. A backpropagation-based artificial neural network is trained to identify the clustered speech code, and its performance is compared with a VQ-based Euclidean minimum-distance classifier. Biometric systems that use a single modality are usually affected by problems such as noisy sensor data, non-universality and/or lack of distinctiveness of the biometric trait, unacceptable error rates, and spoof attacks. A multi-finger feature-level fusion based fingerprint recognition system is therefore developed, and its performance is measured in terms of the ROC curve. Score-level fusion of the fingerprint and speech based recognition systems is performed, and 100% accuracy is achieved over a considerable range of matching thresholds.
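The score-level fusion step at the end can be sketched as a weighted sum of min-max-normalized matcher scores compared against a matching threshold; the weights, score ranges, and threshold below are illustrative assumptions rather than the thesis's tuned values.

```python
def min_max_normalize(score: float, lo: float, hi: float) -> float:
    """Map a raw matcher score onto [0, 1] using its observed range."""
    return (score - lo) / (hi - lo)

def fused_decision(fp_score: float, sp_score: float,
                   w_fp: float = 0.6, w_sp: float = 0.4,
                   threshold: float = 0.5) -> bool:
    """Score-level fusion: weighted sum of the normalized fingerprint
    and speech scores, accepted if it clears the matching threshold."""
    fp = min_max_normalize(fp_score, lo=0.0, hi=100.0)   # assumed range
    sp = min_max_normalize(sp_score, lo=-1.0, hi=1.0)    # assumed range
    return w_fp * fp + w_sp * sp >= threshold

print(fused_decision(fp_score=82.0, sp_score=0.4))   # True: genuine-like scores
print(fused_decision(fp_score=30.0, sp_score=-0.5))  # False: impostor-like
```

Reporting accuracy over a range of `threshold` values is what the abstract means by achieving 100% accuracy "over a considerable range of matching thresholds".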