826 results for "2D barcode based authentication scheme"


Relevance: 30.00%

Abstract:

Behavioral biometrics is one of the areas with growing interest within the biosignal research community. A recent trend in the field is ECG-based biometrics, where electrocardiographic (ECG) signals are used as input to the biometric system. Previous work has shown the ECG to be a promising trait that, owing to its intrinsic characteristics, has the potential to complement other, more established modalities. In this paper, we propose a system for ECG biometrics centered on signals acquired at the subject's hand. Our work builds on a previously developed custom, non-intrusive sensing apparatus for data acquisition at the hands, and covers the pre-processing of the ECG signals and the evaluation of two classification approaches targeted at real-time or near-real-time applications. Preliminary results show that this system leads to competitive results both for authentication and identification, and further validate the potential of ECG signals as a complementary modality in the toolbox of the biometric system designer.
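
As an illustration of the kind of processing pipeline such a system implies, here is a minimal sketch (band-pass filtering, heartbeat segmentation around R-peaks, correlation-based matching against an enrolled template); the sampling rate, filter band, window length, and decision threshold are illustrative assumptions, not the paper's actual parameters.

```python
# Illustrative ECG-based authentication sketch (not the paper's pipeline).
# Assumed: 1000 Hz sampling, 5-20 Hz band, 600 ms heartbeat windows.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 1000  # sampling rate in Hz (assumption)

def preprocess(ecg):
    """Band-pass filter to suppress baseline wander and high-frequency noise."""
    b, a = butter(4, [5 / (FS / 2), 20 / (FS / 2)], btype="band")
    return filtfilt(b, a, ecg)

def heartbeats(ecg):
    """Segment fixed windows around detected R-peaks and normalize them."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * FS),
                          height=np.percentile(ecg, 95))
    half = int(0.3 * FS)
    segs = [ecg[p - half:p + half] for p in peaks
            if p - half >= 0 and p + half <= len(ecg)]
    return [(s - s.mean()) / s.std() for s in segs]

def authenticate(probe_ecg, template, threshold=0.85):
    """Accept the claimed identity if the mean correlation between the
    probe's heartbeats and the enrolled mean-heartbeat template (same
    window length) exceeds a decision threshold."""
    beats = heartbeats(preprocess(probe_ecg))
    scores = [np.corrcoef(b, template)[0, 1] for b in beats]
    return np.mean(scores) >= threshold
```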

Relevance: 30.00%

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa to obtain the Master's degree in Environmental Engineering, specialization in Environmental Systems Management.

Relevance: 30.00%

Abstract:

Work presented within the scope of the Master's programme in Informatics Engineering, as a partial requirement for obtaining the Master's degree in Informatics Engineering.

Relevance: 30.00%

Abstract:

Project work carried out to obtain the Master's degree in Informatics and Computer Engineering.

Relevance: 30.00%

Abstract:

Studying changes in brain activation according to the valence of emotion-inducing stimuli is essential in emotion research. Given the ecological potential of virtual reality, it is also important to examine whether brain activation in response to emotional stimuli can be modulated by the three-dimensional (3D) properties of the images. This study uses functional Magnetic Resonance Imaging (fMRI) to compare differences between 3D and standard (2D) visual stimuli in the activation of emotion-related brain areas. The stimuli were organized in three virtual-reality scenarios, each with a different emotional valence (pleasant, unpleasant, and neutral). The scenarios were presented in a pseudo-randomized order in the two visualization modes to twelve healthy males. Data were analyzed through a GLM-based fixed-effects procedure. Unpleasant and neutral stimuli activated the right amygdala more strongly when presented in 3D than in 2D. These results suggest that 3D stimuli, when used as "building blocks" for virtual environments, can induce increased emotional loading, as shown here through neuroimaging.
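
For readers unfamiliar with the analysis step, here is a minimal sketch of a GLM fixed-effects contrast (3D > 2D) on a single voxel time series; the block design, regressors, and contrast are illustrative assumptions, not the study's actual model.

```python
# Illustrative GLM fixed-effects contrast (3D > 2D) for one voxel time
# series; the design matrix and contrast are assumptions for illustration.
import numpy as np

def glm_contrast_t(y, X, c):
    """Fit y = X beta + noise by least squares and return the t-statistic
    of the linear contrast c' beta."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - np.linalg.matrix_rank(X)
    sigma2 = resid @ resid / dof
    var_c = sigma2 * c @ np.linalg.pinv(X.T @ X) @ c
    return (c @ beta) / np.sqrt(var_c)

# Toy design: alternating 3D / 2D / neutral blocks plus an intercept.
rng = np.random.default_rng(0)
n = 120
cond = (np.arange(n) // 20) % 3                   # 0=3D, 1=2D, 2=neutral
X = np.column_stack([(cond == 0).astype(float),   # 3D regressor
                     (cond == 1).astype(float),   # 2D regressor
                     np.ones(n)])                 # intercept
y = X @ np.array([1.5, 0.7, 10.0]) + rng.normal(0, 1, n)  # synthetic voxel
t_3d_gt_2d = glm_contrast_t(y, X, np.array([1.0, -1.0, 0.0]))
```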

Relevance: 30.00%

Abstract:

This paper reports on the design and development of an Android-based context-aware system to support Erasmus students during their mobility in Porto. It enables: (i) guest users to create, rate, and store personal points of interest (POI) in a private, local on-board database; and (ii) authenticated users to upload and share POI, as well as get and rate recommended POI from the shared central database. The system is a distributed client/server application. The server interacts with a central database that maintains the user profiles and the shared POI, organized by category and rating. The Android GUI application works both as a standalone application and as a client module. In standalone mode, guest users have access to generic information, a map-based interface, and a local database to store and retrieve personal POI. Upon successful authentication, users can, additionally, share POI as well as get and rate recommendations sorted by category, rating, and distance to the user.
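
A minimal sketch of the server-side recommendation query implied by the description (POI filtered by category, then sorted by rating and distance to the user); the data model and field names are assumptions, not the system's actual schema.

```python
# Illustrative server-side POI recommendation query (data model assumed).
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

@dataclass
class POI:
    name: str
    category: str
    rating: float   # mean user rating
    lat: float
    lon: float

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = (sin(dlat / 2) ** 2
         + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def recommend(pois, category, user_lat, user_lon, limit=10):
    """Return POI of the requested category, best-rated first and
    nearest-first among equal ratings."""
    matches = [p for p in pois if p.category == category]
    matches.sort(key=lambda p: (-p.rating,
                                distance_km(user_lat, user_lon, p.lat, p.lon)))
    return matches[:limit]
```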

Relevance: 30.00%

Abstract:

Functionally graded composite materials can provide continuously varying properties, whose distribution depends on the specific location within the composite. Most frequently, functionally graded materials follow a through-thickness variation law, which may be more or less smooth but possesses one important characteristic: the property variation profiles are continuous, which eliminates the abrupt stress discontinuities found in laminated composites. This study aims to analyze the transient dynamic behavior of sandwich structures having a metallic core and functionally graded outer layers. To this purpose, the properties of the particulate composite metal-ceramic outer layers are estimated using the Mori-Tanaka scheme, and the dynamic analyses consider first-order and higher-order shear deformation theories implemented through the kriging finite element method. The transient dynamic response of these structures is obtained through the Bossak-Newmark method. The illustrative cases presented in this work consider the influence of the shape functions' interpolation domain, the through-thickness distribution of the properties, and the influence of considering different materials, aspect ratios, and boundary conditions.
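
A minimal sketch of the property-estimation step, assuming a power-law through-thickness volume-fraction profile and the Mori-Tanaka estimate for a two-phase metal-ceramic composite; the material values and the gradation exponent are illustrative assumptions, not the paper's data.

```python
# Illustrative Mori-Tanaka estimate of through-thickness properties for a
# metal-ceramic functionally graded layer (values and exponent assumed).
import numpy as np

def bulk_shear(E, nu):
    """Convert Young's modulus and Poisson's ratio to bulk/shear moduli."""
    return E / (3 * (1 - 2 * nu)), E / (2 * (1 + nu))

def mori_tanaka(Km, Gm, Kc, Gc, Vc):
    """Mori-Tanaka effective bulk and shear moduli for spherical ceramic
    particles (volume fraction Vc) dispersed in a metal matrix."""
    K = Km + Vc * (Kc - Km) / (1 + (1 - Vc) * (Kc - Km) / (Km + 4 * Gm / 3))
    f = Gm * (9 * Km + 8 * Gm) / (6 * (Km + 2 * Gm))
    G = Gm + Vc * (Gc - Gm) / (1 + (1 - Vc) * (Gc - Gm) / (Gm + f))
    return K, G

# Power-law ceramic volume fraction through the layer thickness z in [0, 1].
p = 2.0                                  # gradation exponent (assumption)
Km, Gm = bulk_shear(70e9, 0.30)          # aluminium-like matrix (assumption)
Kc, Gc = bulk_shear(380e9, 0.25)         # alumina-like particles (assumption)
for z in np.linspace(0.0, 1.0, 5):
    Vc = z ** p
    K, G = mori_tanaka(Km, Gm, Kc, Gc, Vc)
    E = 9 * K * G / (3 * K + G)          # back to Young's modulus
    print(f"z={z:.2f}  Vc={Vc:.2f}  E={E / 1e9:.1f} GPa")
```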

Relevance: 30.00%

Abstract:

In this paper, a new method for the self-localization of mobile robots in unstructured environments, based on a PCA positioning sensor, is proposed and experimentally validated. The proposed PCA extension is able to compute the eigenvectors from a set of signals corrupted by missing data. The sensor package considered in this work contains a 2D depth sensor pointed upwards to the ceiling, providing depth images with missing data. The resulting positioning sensor is then integrated in a Linear Parameter Varying mobile robot model to obtain a self-localization system, based on linear Kalman filters, with globally stable position error estimates. A study consisting of adding synthetic, randomly corrupted data to the captured depth images revealed that this extended PCA technique is able to reconstruct the signals with improved accuracy. The resulting self-localization system is assessed in unstructured environments, and the methodologies are validated even under varying illumination conditions.
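
A minimal sketch of one standard way to compute principal components from signals with missing entries (EM-style alternation between low-rank reconstruction and imputation); this is a generic technique and may differ from the paper's exact PCA extension.

```python
# Illustrative EM-style PCA with missing data (generic technique; the
# paper's exact PCA extension may differ).
import numpy as np

def pca_missing(X, n_components, n_iter=50):
    """Estimate principal components of X (rows = samples) whose missing
    entries are NaN, by alternating low-rank reconstruction and imputation."""
    mask = np.isnan(X)
    Xf = np.where(mask, np.nanmean(X, axis=0), X)  # init: column means
    for _ in range(n_iter):
        mu = Xf.mean(axis=0)
        U, S, Vt = np.linalg.svd(Xf - mu, full_matrices=False)
        recon = mu + (U[:, :n_components] * S[:n_components]) @ Vt[:n_components]
        Xf = np.where(mask, recon, X)              # re-impute missing cells only
    return Vt[:n_components], Xf

# Usage: rows of a depth image stream with corrupted (NaN) pixels.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 40)) @ rng.normal(size=(40, 40))
X[rng.random(X.shape) < 0.1] = np.nan              # 10% missing data
components, X_reconstructed = pca_missing(X, n_components=5)
```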

Relevance: 30.00%

Abstract:

Biometric recognition is emerging as an alternative solution for applications where the privacy of the information is crucial. This paper presents an embedded biometric recognition system based on electrocardiographic (ECG) signals for individual identification and authentication. The proposed system implements a real-time, state-of-the-art recognition algorithm which extracts information from the frequency domain. The system is based on an ARM Cortex 4 processor. Preliminary results show that embedded platforms are a promising path for the implementation of ECG-based applications in real-world scenarios.
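
A minimal sketch of frequency-domain feature extraction of the kind described (DFT magnitudes of a normalized heartbeat segment as a compact feature vector, matched by cosine similarity); the details are assumptions, not the deployed algorithm.

```python
# Illustrative frequency-domain ECG feature extraction (details assumed;
# the embedded system's exact features may differ).
import numpy as np

def freq_features(beat, n_coeffs=32):
    """DFT magnitude of a normalized heartbeat segment, truncated to the
    first n_coeffs coefficients, as a compact identity feature vector."""
    beat = (beat - beat.mean()) / beat.std()
    spectrum = np.abs(np.fft.rfft(beat))
    return spectrum[:n_coeffs] / np.linalg.norm(spectrum[:n_coeffs])

def identify(probe_beat, enrolled):
    """Return the enrolled subject whose stored feature template is most
    similar (cosine similarity) to the probe's feature vector."""
    f = freq_features(probe_beat)
    return max(enrolled, key=lambda subj: f @ enrolled[subj])
```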

Relevance: 30.00%

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix which minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms, such as the pixel purity index (PPI) [35] and N-FINDR [40], still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists of flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA (vertex component analysis) works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
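
A minimal sketch of the iterative projection loop described above, assuming pure pixels and data already reduced to the signal subspace; it illustrates the idea rather than reproducing the published VCA implementation.

```python
# Illustrative sketch of the VCA-style endmember extraction loop (assumes
# pure pixels and data already projected onto the signal subspace; the
# published VCA implementation differs in several details).
import numpy as np

def extract_endmembers(R, p, seed=0):
    """R: L x n matrix with one spectral vector per column; p: number of
    endmembers. Iteratively project onto a direction orthogonal to the
    subspace spanned by the endmembers found so far, and keep the pixel
    at the extreme of the projection."""
    L, n = R.shape
    E = np.zeros((L, p))                  # extracted endmember signatures
    rng = np.random.default_rng(seed)
    for i in range(p):
        if i == 0:
            w = rng.normal(size=L)        # any direction to start
        else:
            A = E[:, :i]                  # endmembers found so far
            P = np.eye(L) - A @ np.linalg.pinv(A)  # projector onto their
            w = P @ rng.normal(size=L)             # orthogonal complement
        proj = w @ R                      # scalar projection of each pixel
        E[:, i] = R[:, np.argmax(np.abs(proj))]    # extreme pixel -> endmember
    return E
```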

Relevance: 30.00%

Abstract:

A correlation and predictive scheme for the viscosity and self-diffusivity of liquid dialkyl adipates is presented. The scheme is based on the kinetic theory for dense hard-sphere fluids, applied to the van der Waals model of a liquid, to predict the transport properties. A "universal" curve for a dimensionless viscosity of dialkyl adipates was obtained using recently published experimental viscosity and density data of compressed liquid dimethyl (DMA), dipropyl (DPA), and dibutyl (DBA) adipates. The experimental data are described by the correlation scheme with a root-mean-square deviation of +/- 0.34 %. The parameters describing the temperature dependence of the characteristic volume, V0, and the roughness parameter, R_eta, for each adipate are well correlated with one single molecular parameter. Recently published experimental self-diffusion coefficients of the same set of liquid dialkyl adipates at atmospheric pressure were correlated using the characteristic volumes obtained from the viscosity data. The roughness factors, R_D, are well correlated with the same single molecular parameter found for viscosity; the root-mean-square deviation of the data from the correlation is less than 1.07 %. Tests are presented to assess the capability of the correlation scheme to estimate the viscosity of compressed liquid diethyl adipate (DEA) over a range of temperatures and pressures, by comparison with literature data, and its self-diffusivity at atmospheric pressure over a range of temperatures. It is noteworthy that no DEA data were used to build the correlation scheme. The deviations between predicted and experimental data for the viscosity and self-diffusivity do not exceed 2.0 % and 2.2 %, respectively, which is commensurate with the estimated experimental measurement uncertainty in both cases.
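
A minimal sketch of the bookkeeping such a correlation scheme involves (collapsing data onto a "universal" dimensionless curve and reporting root-mean-square deviations); the functional form and the numbers are placeholders, not the published correlation.

```python
# Illustrative correlation bookkeeping: fit a "universal" dimensionless
# viscosity curve and report the RMS deviation (functional form and data
# are placeholders, not the published correlation).
import numpy as np

def rms_deviation(y_exp, y_corr):
    """Root-mean-square relative deviation, in percent."""
    r = (y_corr - y_exp) / y_exp
    return 100 * np.sqrt(np.mean(r ** 2))

# Placeholder points: reduced volume V/V0(T) vs dimensionless viscosity
# (already scaled by the roughness parameter R_eta).
Vr = np.array([1.5, 1.6, 1.7, 1.8, 1.9])           # V / V0 (placeholder)
eta_star = np.array([12.1, 8.3, 6.0, 4.5, 3.5])    # eta* / R_eta (placeholder)

# Fit log(eta*) as a polynomial in 1/Vr, one common universal-curve form.
coeffs = np.polyfit(1 / Vr, np.log(eta_star), deg=2)
eta_fit = np.exp(np.polyval(coeffs, 1 / Vr))
print(f"RMSD = {rms_deviation(eta_star, eta_fit):.2f} %")
```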

Relevance: 30.00%

Abstract:

The increasing and intensive integration of distributed energy resources into distribution systems requires adequate methodologies to ensure secure operation according to the smart grid paradigm. In this context, SCADA (Supervisory Control and Data Acquisition) systems are an essential infrastructure. This paper presents the conceptual design of a communication and resource management scheme based on an intelligent SCADA system with a decentralized, flexible, and intelligent approach that is adaptive to the context (context awareness). The methodology is used to support energy resource management, considering all the involved costs, power flows, and electricity prices, and leading to network reconfiguration. The methodology also addresses the definition of each player's information access permissions for each resource. The paper includes a 33-bus network used in a case study that considers an intensive use of distributed energy resources in five distinct implemented operation contexts.
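
A minimal sketch of context-dependent access permissions of the kind described (each player's rights over each resource varying with the operation context); all player, resource, and context names are hypothetical.

```python
# Illustrative context-dependent access permissions (all player, resource,
# and context names are hypothetical).
from enum import Enum

class Context(Enum):
    NORMAL = "normal"
    PEAK = "peak"
    OUTAGE = "outage"

# permissions[context][player] -> set of resources the player may control
permissions = {
    Context.NORMAL: {"aggregator": {"pv_unit", "storage"},
                     "dso":        {"capacitor_bank"}},
    Context.PEAK:   {"aggregator": {"storage"},
                     "dso":        {"capacitor_bank", "pv_unit", "storage"}},
    Context.OUTAGE: {"dso":        {"capacitor_bank", "pv_unit", "storage"}},
}

def may_control(player, resource, context):
    """Check whether a player may act on a resource in the current context."""
    return resource in permissions.get(context, {}).get(player, set())

assert may_control("dso", "pv_unit", Context.PEAK)
assert not may_control("aggregator", "pv_unit", Context.PEAK)
```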

Relevance: 30.00%

Abstract:

In recent years, vehicular cloud computing (VCC) has emerged as a new technology used in a wide range of multimedia-based healthcare applications. In VCC, vehicles act as intelligent machines that collect and transfer healthcare data to local or global sites for storage and computation purposes, as vehicles have comparatively limited storage and computation power for handling multimedia files. However, due to dynamic changes in topology and the lack of centralized monitoring points, this information can be altered or misused. These security breaches can result in disastrous consequences such as loss of life or financial fraud. To address these issues, a learning automata-assisted distributive intrusion detection system based on clustering is designed. Although there exist a number of applications where the proposed scheme can be applied, we have taken a multimedia-based healthcare application to illustrate the proposed scheme. In the proposed scheme, learning automata (LA) are assumed to be stationed on the vehicles; they take clustering decisions intelligently and select one of the members of the group as a cluster-head. The cluster-heads then assist in the efficient storage and dissemination of information through a cloud-based infrastructure. To secure the proposed scheme from malicious activities, a standard cryptographic technique is used, in which the automaton learns from the environment and takes adaptive decisions to identify any malicious activity in the network. A reward or penalty is given by the stochastic environment in which an automaton performs its actions, and the automaton updates its action probability vector after receiving the reinforcement signal from the environment. The proposed scheme was evaluated using extensive simulations on ns-2 with SUMO. The results obtained indicate that the proposed scheme yields an improvement of 10 % in the detection rate of malicious nodes when compared with existing schemes.
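
A minimal sketch of the standard linear reward-penalty (L_RP) update of a learning automaton's action probability vector; the learning rates are illustrative, and the paper's exact LA scheme may differ.

```python
# Illustrative linear reward-penalty (L_RP) update of a learning automaton's
# action probability vector (learning rates are illustrative; the paper's
# exact LA scheme may differ).
import numpy as np

def la_update(p, action, reward, a=0.1, b=0.05):
    """Update probability vector p after taking `action` and receiving a
    binary reinforcement signal (reward=True/False) from the environment."""
    p = p.copy()
    r = len(p)
    if reward:
        p *= (1 - a)                      # p_j <- (1 - a) p_j for all j
        p[action] += a                    # p_i <- p_i + a (1 - p_i)
    else:
        pa = (1 - b) * p[action]          # penalized chosen action
        p = b / (r - 1) + (1 - b) * p     # p_j <- b/(r-1) + (1 - b) p_j
        p[action] = pa
    return p

# Usage: four candidate cluster-heads; the environment rewarded action 2.
p = np.full(4, 0.25)
p = la_update(p, action=2, reward=True)
```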

Relevance: 30.00%

Abstract:

IEEE 802.11 is one of the most well-established and widely used standards for wireless LANs. Its Medium Access Control (MAC) layer assumes that devices adhere to the standard's rules and timers to assure fair access and sharing of the medium. However, the flexibility and configurability of wireless card drivers make it possible for selfish misbehaving nodes to take advantage over other well-behaving nodes. The existence of selfish nodes degrades the QoS for the other devices in the network and may increase their energy consumption. In this paper, we propose a green solution for selfish misbehavior detection in IEEE 802.11-based wireless networks. The proposed scheme works in two phases: a Global phase, which detects whether the network contains selfish nodes or not, and a Local phase, which identifies which node or nodes within the network are selfish. Usually, the network must be examined frequently for selfish nodes during its operation, since any node may act selfishly. Our solution is green in the sense that it conserves network resources: it avoids wasting the nodes' energy by not examining all the individual nodes for selfishness when it is not necessary. The proposed detection algorithm is evaluated using extensive OPNET simulations. The results show that the Global network metric clearly indicates the existence of a selfish node, while the Local node metric successfully identifies the selfish node(s). We also provide a mathematical analysis of selfish misbehavior and derive formulas for the successful channel access probability.
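
A minimal sketch of the two-phase structure described; Jain's fairness index as the global metric and a fair-share threshold for the local phase are illustrative stand-ins for the paper's actual metrics.

```python
# Illustrative two-phase selfish-node detection (Jain's fairness index and
# the thresholds are stand-ins for the paper's actual metrics).
import numpy as np

def global_phase(access_counts, fairness_threshold=0.9):
    """Phase 1: one network-wide check. Jain's fairness index close to 1
    means fair sharing; a low value suggests selfish behavior somewhere."""
    x = np.asarray(access_counts, dtype=float)
    jain = x.sum() ** 2 / (len(x) * (x ** 2).sum())
    return jain < fairness_threshold      # True -> run the local phase

def local_phase(access_counts, share_factor=1.5):
    """Phase 2: only if phase 1 fired, inspect individual nodes and flag
    those whose channel-access share is far above the fair share."""
    x = np.asarray(access_counts, dtype=float)
    fair_share = x.sum() / len(x)
    return [i for i, c in enumerate(x) if c > share_factor * fair_share]

counts = [120, 115, 118, 360, 117]        # node 3 hogs the channel (example)
if global_phase(counts):
    print("selfish node(s):", local_phase(counts))
```

The energy saving comes from the ordering: per-node inspection runs only when the single global check indicates misbehavior.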

Relevance: 30.00%

Abstract:

This work is a contribution to the definition and assessment of structural robustness. Special emphasis is given to the reliability of reinforced concrete structures under corrosion of the longitudinal reinforcement. In this communication, several authors' proposals for defining and measuring structural robustness are analyzed and discussed. A probabilistic robustness index is defined, based on the decrease of the reliability index over all possible damage levels. Damage is taken as the corrosion level of the longitudinal reinforcement, expressed in terms of rebar weight loss. Damage produces changes in both the cross-sectional area of the rebar and the bond strength. The proposed methodology is illustrated by means of an application example. In order to capture the impact of reinforcement corrosion on the growth of the failure probability, an advanced methodology based on the strong discontinuities approach and an isotropic continuum damage model for concrete is adopted. The methodology consists of a two-step analysis: in the first step, a cross-sectional analysis is performed to capture phenomena such as expansion of the reinforcement due to the accumulation of corrosion products, and damage and cracking in the concrete surrounding the reinforcement; in the second step, a 2D deteriorated structural model is built from the results obtained in the first step. This methodology, combined with Monte Carlo simulation, is then used to compute the failure probability and the reliability index of the structure for different corrosion levels. Finally, structural robustness is assessed using the proposed probabilistic index.
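
A minimal sketch of the Monte Carlo step (failure probability and reliability index beta = -Phi^(-1)(Pf) at each corrosion level, followed by a normalized robustness integral over the damage range); the limit-state function and distributions are placeholders, not the paper's structural model.

```python
# Illustrative Monte Carlo reliability analysis over damage levels
# (limit-state function and distributions are placeholders, not the
# paper's structural model).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

def reliability_index(corrosion, n_samples=200_000):
    """Pf = P(resistance < load effect) for a placeholder limit state in
    which corrosion (rebar weight loss, 0..1) reduces the mean resistance."""
    R = rng.normal(100 * (1 - 0.6 * corrosion), 10, n_samples)  # resistance
    S = rng.normal(60, 8, n_samples)                            # load effect
    pf = np.mean(R - S < 0)
    return -norm.ppf(pf)                  # beta = -Phi^{-1}(Pf)

# Robustness index: area under the normalized reliability-vs-damage curve.
damage = np.linspace(0.0, 0.5, 6)         # corrosion levels (placeholder)
betas = np.array([reliability_index(d) for d in damage])
robustness = np.trapz(betas / betas[0], damage) / damage[-1]
print(f"robustness index ~ {robustness:.2f}")
```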