826 results for 2D barcode based authentication scheme
Abstract:
A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
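As an aside, the ordered-subsets update used by such reconstructions can be sketched in a few lines. The example below is a generic OSEM loop over a precomputed sparse system matrix; the matrix contents, the subset partition and the array sizes are illustrative assumptions, not the scanner-specific Monte Carlo model described above.

```python
# Generic OSEM iteration over a precomputed sparse system matrix (illustration only).
# Assumes the forward model y ≈ A @ x, with A stored in CSR form as in the abstract.
import numpy as np
from scipy import sparse

def osem(A: sparse.csr_matrix, y: np.ndarray, n_subsets: int = 8, n_iter: int = 4,
         eps: float = 1e-12) -> np.ndarray:
    """A: (n_bins x n_voxels) system matrix, y: measured projection data."""
    n_bins, n_voxels = A.shape
    x = np.ones(n_voxels)                                   # uniform initial image
    subsets = np.array_split(np.arange(n_bins), n_subsets)  # interleaved subsets are also common
    for _ in range(n_iter):
        for rows in subsets:
            A_s = A[rows]                                        # subset of projection rows
            sens = np.asarray(A_s.sum(axis=0)).ravel() + eps    # subset sensitivity image
            ratio = y[rows] / (A_s @ x + eps)                    # measured / estimated projections
            x *= (A_s.T @ ratio) / sens                          # multiplicative EM update
    return x
```

Because only the non-zero elements of the system matrix are stored, each subset update reduces to a sparse matrix-vector product, which is what makes the fully 3D reconstruction tractable.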
Abstract:
Analysis of river flow using hydraulic modelling, and its implications for derived environmental applications, is inextricably connected with the way in which the river boundary shape is represented. This relationship is scale-dependent upon the modelling resolution, which in turn determines the importance of the subscale performance of the model and the way subscale (surface and flow) processes are parameterised. Commonly, the subscale behaviour of the model relies upon a roughness parameterisation whose meaning depends on the dimensionality of the hydraulic model and the resolution of the topographic representation scale. The latter is, in turn, dependent on the resolution of the computational mesh as well as on the detail of the measured topographic data. Flow results are affected by these interactions between scale and subscale parameterisation according to the dimensionality approach. The aim of this dissertation is to evaluate these interactions and their impact on hydraulic modelling results. The current availability of high-resolution topographic sources motivates this research, which is tackled using a roughness approach suited to each dimensionality in order to assess the interactions. A 1D HEC-RAS model, a 2D raster-based diffusion-wave model with a scale-dependent distributed roughness parameterisation, and a 3D finite volume scheme with a porosity algorithm approach to incorporate complex topography have been used. Different topographic sources are assessed using the 1D scheme. LiDAR data are used to isolate the effects of mesh resolution from those of the topographic content of the DEM upon 2D and 3D flow results. A distributed roughness parameterisation, using a roughness height approach dependent upon both mesh resolution and topographic content, is developed and evaluated for the 2D scheme. Grain-size data and fractal methods are used for the reconstruction of topography with microscale information, required for some applications but not easily available. The sensitivity of hydraulic parameters to this topographic parameterisation is evaluated in a 3D scheme at different mesh resolutions. Finally, the structural variability of the simulated flow is analysed and related to scale interactions. Model simulations demonstrate (i) the importance of the topographic source in 1D models; (ii) that the mesh resolution approach is dominant in 2D and 3D simulations, whereas in a 1D model the topographic source and even the roughness parameterisation have more critical impacts; (iii) the increased sensitivity to roughness parameterisation in 1D and 2D schemes with detailed topographic sources and finer mesh resolutions; and (iv) that topographic content and microtopography affect the whole vertical profile of computed 3D velocity in a depth-dependent way, whereas 2D results are not affected by topographic content variations. Finally, the spatial analysis shows that the mesh resolution controls model results at high resolution, that the roughness parameterisation controls 2D simulation results for a constant mesh resolution, and that topographic content and microtopography variations impact the organisation of flow results depth-dependently in a 3D scheme.
Resumen: Topography plays a fundamental role in the distribution of water and energy across natural landscapes (Beven and Kirkby 1979; Wood et al. 1997).
Hydraulic simulation combined with remote-sensing terrain measurement methods constitutes a powerful research tool for understanding how water flow behaves in response to the variability of the surface over which it flows. The representation and incorporation of topography in the hydraulic scheme are of crucial importance for the results and determine the development of its environmental applications. Any simulation is a simplification of a real-world process, and the degree of simplification therefore determines the meaning of the simulated results. This reasoning is particularly difficult to carry over to hydraulic simulation, where aspects of scale as different as the scale of the flow processes and the scale of boundary representation are considered jointly, even at parameterisation stages (e.g. roughness parameterisation). On the one hand, this is because scale decisions condition one another (e.g. the dimensionality of the model conditions the scale of boundary representation) and therefore interact closely in their results. On the other hand, it is due to the high numerical and computational requirements of an explicit high-resolution representation of the flow processes and of the mesh discretisation. Moreover, prior to hydraulic modelling, the terrain surface over which the water flows must itself be modelled and therefore has its own representation scale, which in turn depends on the scale of the measured topographic data from which the model is built. Ultimately, it is this topography that determines the spatial behaviour of the flow. Therefore, the scale of the topography in its measurement and modelling stages (data resolution and topographic representation), prior to its incorporation into the hydraulic model, will in turn produce an impact that adds to the overall impact arising from the computational scale of the hydraulic model and its dimensionality. Understanding the interactions between complex boundary geometries and flow structure using hydraulic modelling depends on the scales considered in the simplification of the hydraulic processes and the terrain (model dimensionality, computational scale size and scale of the topographic data). The nature of the hydraulic model application (e.g. physical habitat, flood risk analysis, sediment transport) determines in the first place the scale of the study and hence the detail of the processes to be simulated in the model (i.e. the dimensionality) and, consequently, the computational scale at which the calculations will be carried out (i.e. computational resolution). The latter in turn determines the geographic detail with which the boundary must be represented, in accordance with the resolution of the computational mesh. Parameterisation seeks to incorporate into the hydraulic model the quantification of the processes and physical conditions of the natural system; it must therefore include not only those processes that occur at the modelling scale, but also those that occur at a subscale level and that must be defined through scaling relationships with the explicitly modelled variables.
In practice, this parameterisation is implemented by supplying data to the model; the scale of the geographic data used to parameterise the model will therefore not only influence the results, but will also determine the importance of the model's subscale behaviour and the way in which these processes must be parameterised (e.g. the natural variability of the terrain within the discretisation cell, or flow in the lateral and vertical directions in a one-dimensional model). In this thesis, the one-dimensional HEC-RAS model (HEC 1998b), a two-dimensional raster-based diffusion-wave model (Yu 2005) and a three-dimensional finite volume scheme with a porosity algorithm to incorporate topography (Lane et al. 2004; Hardy et al. 2005) have been used. The boundary geometry is defined by the topographic representation scale (mesh resolution and topographic content), which in turn depends on the scale of the cartographic source. All of these scale factors interact in the response of the hydraulic model to topography. In recent years, methods such as fractal analysis and the geostatistical techniques used to represent and analyse geographic features (e.g. in surface characterisation (Herzfeld and Overbeck 1999; Butler et al. 2001)) have been promoting new approaches to the quantification of scale effects (Lam et al. 2004; Atkinson and Tate 2000; Lam et al. 2006) through the analysis of the spatial structure of the variable (e.g. Bishop et al. 2006; Ju et al. 2005; Myint et al. 2004; Weng 2002; Bian and Xie 2004; Southworth et al. 2006; Pozdnyakova et al. 2005; Kyriakidis and Goodchild 2006). These methods quantify both the range of values of the variable present at different scales and the homogeneity or heterogeneity of the spatially distributed variable (Lam et al. 2004). In this thesis, these techniques have been used to analyse the impact of topography on the structure of the simulated hydraulic results. High-resolution remote sensing data and GIS techniques are also being used to better understand scale effects in environmental models (Marceau 1999; Skidmore 2002; Goodchild 2003) and are used in this thesis. As a body of research, this thesis addresses the interactions of these scales in hydraulic modelling from a global, interrelated standpoint. However, the structure and main focus of the experiments relate to the spatial notions of representation scale in relation to an overall view of the interactions between scales. In theory, the topographic representation should characterise the surface over which the water runs at a discretisation scale that is adequate (in accordance with the purpose and dimensionality of the model), so that it reflects the processes of interest. The roughness parameterisation should reflect the effects of surface variability at scales finer than those explicitly represented in the topographic mesh (i.e. the discretisation scale). Clearly, both concepts are physically related by a
Abstract:
Collaborative information sharing is an increasingly necessary technique for achieving complex goals in today's fast-paced, technology-dominated world. Personal Health Record (PHR) systems have become a popular research area because they allow patient information to be shared quickly among health professionals. PHR systems store and process sensitive information and therefore require proper security mechanisms to protect patients' private data. Thus, the access control mechanisms of the PHR should be well defined, and PHRs should be stored in encrypted form. Cryptographic schemes that enforce access policies based on user attributes offer a suitable solution for this purpose. Because attribute-based encryption can resolve these problems, we propose a patient-centric framework that protects PHRs against untrusted service providers and malicious users. In this framework, we use the Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme as an efficient cryptographic technique that enhances the security and privacy of the system and enables access revocation. Patients can encrypt their PHRs and store them on untrusted storage servers. They also maintain full control over access to their PHR data by assigning attribute-based access policies to selected data users and instantly revoking unauthorized users. To evaluate our system, we implemented a CP-ABE library and web services as part of our framework. We also developed an Android application based on the framework that allows users to register with the system, encrypt their PHR data and upload it to the server, while authorized users can download and decrypt PHR data. Finally, we present experimental results and a performance analysis, which show that deploying the proposed system would be practical.
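To make the patient-centric flow concrete, here is a minimal sketch of attribute-gated PHR encryption. It is not CP-ABE itself (the framework above uses a CP-ABE library); it uses ordinary symmetric encryption from the `cryptography` package and a trivial policy check, so the attribute names, the `satisfies` helper and the overall flow are illustrative assumptions only.

```python
# Illustrative stand-in for the patient-centric flow: encrypt a PHR, release the
# data key only to users whose attributes satisfy the patient's policy.
# Real CP-ABE binds the policy into the ciphertext itself; this sketch does not.
from cryptography.fernet import Fernet

def satisfies(user_attrs: set, policy_all_of: set) -> bool:
    """Toy policy: the user must hold every attribute listed by the patient."""
    return policy_all_of.issubset(user_attrs)

# Patient side: encrypt the PHR and attach an access policy.
data_key = Fernet.generate_key()
phr_ciphertext = Fernet(data_key).encrypt(b'{"glucose": 5.4, "bp": "120/80"}')
policy = {"physician", "cardiology"}          # attributes required to read

# Server side (untrusted): stores only ciphertext and policy, never the plaintext.
stored = {"ct": phr_ciphertext, "policy": policy}

# Data-user side: the key is handed over only if the attribute set satisfies the policy.
user_attrs = {"physician", "cardiology", "hospital_A"}
if satisfies(user_attrs, stored["policy"]):
    plaintext = Fernet(data_key).decrypt(stored["ct"])
    print(plaintext.decode())
```

In the actual framework the policy enforcement is cryptographic: only a secret key issued for a satisfying attribute set can decrypt, so no trusted key-release step is needed.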
Abstract:
Over recent decades, work on infrared sensor applications has progressed considerably worldwide. However, one difficulty remains: objects are not sharp enough, or cannot always be distinguished easily, in the image obtained for the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision, image processing, non-destructive testing and related technologies. This thesis addresses infrared image enhancement techniques in two respects: the processing of a single infrared image in the hybrid space-frequency domain, and the fusion of infrared and visible images using the non-subsampled contourlet transform (NSCT). Image fusion can be seen as a continuation of single infrared image enhancement, since it combines infrared and visible images into a single image that represents and enhances all the useful information and features of the source images, given that no single image can contain all the relevant or available information owing to the limitations of any single imaging sensor. We review the development of infrared image enhancement techniques, then focus on single infrared image enhancement and propose a hybrid-domain enhancement scheme with an improved fuzzy threshold evaluation method, which yields higher image quality and improves human visual perception. The infrared and visible image fusion techniques rely on accurate registration of the source images acquired by different sensors. The SURF-RANSAC algorithm is applied for registration throughout this research, leading to very accurately registered images and greater benefits for the fusion processing. For infrared and visible image fusion, a series of advanced and efficient approaches is proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the fusion approaches proposed subsequently. A joint fusion approach involving the Adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, leading to fusion results that are better than those obtained with general non-adaptive methods. An NSCT-based fusion approach employing compressed sensing (CS) and total variation (TV) on sparsely sampled coefficients, and accurately reconstructing the fused coefficients, is proposed; it achieves much better fusion results through pre-enhancement of the infrared image and by reducing the redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients within the fusion process, which leads to better results obtained faster and more efficiently.
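Because the NSCT is not available in common Python packages, the sketch below illustrates the same multiscale fusion idea with a single-level discrete wavelet transform from PyWavelets: approximation coefficients are averaged and detail coefficients are merged with a maximum-absolute-value rule. The wavelet choice, the fusion rules and the synthetic input images are assumptions for illustration, not the thesis's NSCT-based methods.

```python
# Simplified multiscale fusion of a registered infrared/visible image pair.
# A stand-in for NSCT-based fusion: 2-D DWT + average/max-abs fusion rules.
import numpy as np
import pywt

def fuse_pair(ir: np.ndarray, vis: np.ndarray, wavelet: str = "db2") -> np.ndarray:
    """ir and vis are registered grayscale images (same shape, float arrays)."""
    cA_ir, (cH_ir, cV_ir, cD_ir) = pywt.dwt2(ir, wavelet)
    cA_vi, (cH_vi, cV_vi, cD_vi) = pywt.dwt2(vis, wavelet)

    cA_f = 0.5 * (cA_ir + cA_vi)                 # average low-frequency content

    def max_abs(a, b):                           # keep the stronger detail coefficient
        return np.where(np.abs(a) >= np.abs(b), a, b)

    details = (max_abs(cH_ir, cH_vi), max_abs(cV_ir, cV_vi), max_abs(cD_ir, cD_vi))
    return pywt.idwt2((cA_f, details), wavelet)

# Example with synthetic arrays standing in for registered sensor images.
rng = np.random.default_rng(0)
ir_img = rng.random((128, 128))
vis_img = rng.random((128, 128))
fused = fuse_pair(ir_img, vis_img)
```

The NSCT-based methods in the thesis follow the same decompose-merge-reconstruct pattern, but with a shift-invariant, directional decomposition and more sophisticated coefficient selection rules.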
Abstract:
We have recently shown that, at isotopic steady state, (13)C NMR can provide a direct measurement of glycogen concentration changes, but that glycogen turnover is not accessible with this protocol. The aim of the present study was to design, implement and apply a novel dual-tracer infusion protocol to measure glycogen concentration and turnover simultaneously. After reaching isotopic steady state for glycogen C1 using [1-(13)C] glucose administration, [1,6-(13)C(2)] glucose was infused such that isotopic steady state was maintained at the C1 position while the C6 position reflected (13)C label incorporation. To overcome the large chemical shift displacement error between the C1 and C6 resonances of glycogen, we implemented 2D gradient-based localization using the Fourier series window approach, in conjunction with time-domain analysis of the resulting FIDs using jMRUI. The glycogen concentration of 5.1 ± 1.6 mM measured from the C1 position was in excellent agreement with concomitant biochemical determinations. Glycogen turnover, measured from the rate of label incorporation into the C6 position of glycogen in the alpha-chloralose-anesthetized rat, was 0.7 µmol/g/h.
Abstract:
A new salicylic acid derivative, pentacosanyl salicylate, was isolated from the leaves of Riedeliella graciliflora, a plant toxic to cattle, together with a digalactosyldiacylglycerol (DGDG), 1,2-di-O-α-linolenoyl-3-O-α-D-galactopyranosyl-(1→6)-β-D-galactopyranosyl-glycerol, kaempferol-3-O-β-D-glucopyranoside, kaempferol-3-O-α-L-rhamnopyranoside, quercetin-3-O-α-L-rhamnopyranoside, rutin, (+)-catechin and the dimer (+)-catechin-(4β-8)-catechin, glutinol, squalene, β-sitosterol, stigmasterol, phytol, β-carotene, α-tocopherol and ficaprenol-12. Their structures were determined using spectroscopic techniques (MS, IR, and 1D and 2D NMR) and by comparison with literature data.
Abstract:
In recent years, the vulnerability of power networks to natural hazards has become apparent. Moreover, operating at the limits of the network's transmission capabilities has resulted in major outages during the past decade. One of the reasons for operating at these limits is that the network has become outdated. Therefore, new technical solutions are being studied that could provide more reliable and more energy-efficient power distribution, and also better profitability for the network owner. It is the development and price of power electronics that have again made DC distribution an attractive alternative. In this doctoral thesis, one type of low-voltage DC distribution system is investigated. More specifically, the study addresses which current technological solutions, used at the customer end, could provide better power quality for the customer compared with the present system. To study the effect of a DC network on customer-end power quality, a bipolar DC network model is derived. The model can also be used to identify the supply parameters when the V/kW ratio is approximately known. Although the model captures the average behavior, it is shown that the instantaneous DC voltage ripple should be limited. Guidelines are given for choosing an appropriate capacitance value for the capacitor located at the input DC terminals of the customer end. The structure of the customer end is also considered. A comparison between the most common solutions is made on the basis of cost, energy efficiency and reliability. In the comparison, special attention is paid to passive filtering solutions, since the filter is a crucial element when the lifetime expenses are determined. It is found that the filter topology most commonly used today, the LC filter, does not provide an economic advantage over the hybrid filter structure. Finally, some typical control system solutions are introduced and their shortcomings are presented. As a solution to the customer-end voltage regulation problem, an observer-based control scheme is proposed, and it is shown how different control system structures affect the performance. Performance meeting the requirements is achieved using only one output measurement when operating in a rigid network; similar performance can be achieved in a weak grid by DC voltage measurement. An additional improvement is achieved when adaptive gain-scheduling-based control is introduced. In conclusion, the final power quality is determined by the sum of various factors, and the thesis provides guidelines for designing a system that improves the power quality experienced by the customer.
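As a rough illustration of the observer-based regulation idea, the sketch below closes a state-feedback loop around a discrete-time Luenberger observer that reconstructs the state from a single measured output. The two-state plant, the gains and the horizon are made-up values for demonstration, not the customer-end converter model identified in the thesis.

```python
# Minimal observer-based regulation sketch: a Luenberger observer estimates the
# unmeasured state from one output, and state feedback acts on the estimate.
# Plant matrices and gains are illustrative assumptions, not the thesis model.
import numpy as np

# Discrete-time plant x[k+1] = A x[k] + B u[k], y[k] = C x[k] (2 states, 1 output)
A = np.array([[0.95, 0.10],
              [-0.10, 0.90]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])

K = np.array([[1.2, 0.8]])      # state-feedback gain (assumed)
L = np.array([[0.5], [0.3]])    # observer gain (assumed)

x = np.array([[1.0], [0.0]])    # true initial state (e.g. voltage error, current)
x_hat = np.zeros((2, 1))        # observer starts with no knowledge

for k in range(50):
    y = C @ x                                          # only one measured output
    u = -K @ x_hat                                     # feedback uses the estimate
    x = A @ x + B @ u                                  # true plant update
    x_hat = A @ x_hat + B @ u + L @ (y - C @ x_hat)    # observer correction step

print("final output magnitude:", abs((C @ x)[0, 0]))
```

With these (assumed) gains both the closed loop and the estimation error are stable, so the output decays toward zero even though only one measurement is fed back, which is the flavour of the customer-end scheme described above.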
Abstract:
Data on medication use are commonly collected in clinical research. Yet no standardized method exists for categorizing them, whether for describing samples or for studying medication use as a variable. This study was designed to develop a simple, empirically based classification system for categorizing medication use. We used factor analysis to reduce the number of possible medication groupings. This analysis revealed a pattern of medication-use constellations that appears to characterize specific clinical groups. To illustrate the potential of the technique, we applied this classification system to samples in which sleep disturbance is prominent: chronic fatigue syndrome and sleep apnea. Our classification method produced five factors that appear to cohere logically. They were named: cardiovascular/metabolic syndrome medications, symptom-relief medications, psychotropic medications, preventive medications, and hormonal medications. Our results show that medication profiles vary by clinical sample. The medication profile associated with apneic participants reflects the comorbid conditions known in this clinical group, while the medication profile associated with chronic fatigue syndrome appears to reflect the common perception of this condition as a psychogenic disorder.
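A minimal sketch of the kind of factor analysis described above, applied to a simulated subject-by-medication indicator matrix with scikit-learn; the simulated data, the number of factors and the loading threshold are illustrative assumptions, not the study's actual analysis.

```python
# Illustrative factor analysis of a subjects x medication-classes usage matrix.
# Simulated 0/1 data stands in for real medication-use records.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n_subjects, n_meds = 300, 20
X = rng.integers(0, 2, size=(n_subjects, n_meds)).astype(float)

fa = FactorAnalysis(n_components=5, random_state=0)
fa.fit(X)

# Group medications by the factors on which they load most strongly.
loadings = fa.components_                      # shape (5 factors, 20 medications)
for f in range(loadings.shape[0]):
    members = np.where(np.abs(loadings[f]) >= 0.3)[0]   # arbitrary loading threshold
    print(f"factor {f}: medications {members.tolist()}")
```

With real medication-use records, correlated indicators (for example antihypertensives and lipid-lowering drugs) would load on the same factor, yielding groupings analogous to the five factors reported above; the random data here will mostly produce weak loadings.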
Abstract:
This paper deals with the problem of stabilizing a class of structures subject to an uncertain excitation due to the temporary coupling of the main system with another uncertain dynamical subsystem. A Lyapunov function based control scheme is proposed to attenuate the structural vibration. In the control design, the actuator dynamics are taken into account. The control scheme is implemented using only feedback information from the main system. The effectiveness of the control scheme is shown for a bridge platform with a crossing vehicle.
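A minimal sketch of the Lyapunov-based attenuation idea on a single-degree-of-freedom structure: taking the total mechanical energy as the Lyapunov function, a velocity feedback makes its time derivative non-positive apart from the disturbance term. The model, gains and excitation below are assumptions for illustration, not the paper's bridge-platform design.

```python
# Lyapunov-style vibration attenuation on a 1-DOF structure m*x'' + k*x = u + w.
# With V = 0.5*m*v**2 + 0.5*k*x**2, the feedback u = -c*v gives dV/dt = v*(w - c*v),
# which is negative whenever the control term dominates the disturbance.
import numpy as np

m, k, c = 1.0, 4.0, 1.5           # mass, stiffness, feedback gain (assumed values)
dt, steps = 0.001, 20000
x, v = 0.05, 0.0                  # initial displacement and velocity

energy = []
for i in range(steps):
    w = 0.1 * np.sin(2.0 * np.pi * 0.5 * i * dt)    # toy uncertain coupling excitation
    u = -c * v                                      # velocity feedback (main-system info only)
    a = (u + w - k * x) / m
    x, v = x + dt * v, v + dt * a                   # explicit Euler integration
    energy.append(0.5 * m * v**2 + 0.5 * k * x**2)  # Lyapunov function value

print("V start:", energy[0], "V end:", energy[-1])
```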
Abstract:
Two wavelet-based control variable transform schemes are described and used to model some important features of forecast error statistics for use in variational data assimilation. The first is a conventional wavelet scheme and the other is an approximation of it. Their ability to capture the position- and scale-dependent aspects of covariance structures is tested in a two-dimensional latitude-height context. This is done by comparing the covariance structures implied by the wavelet schemes with those found from the explicit forecast error covariance matrix, and with a non-wavelet-based covariance scheme currently used in an operational assimilation system. Qualitatively, the wavelet-based schemes show potential for modelling forecast error statistics well without giving preference to either position- or scale-dependent aspects. The degree of spectral representation can be controlled by changing the number of spectral bands in the schemes, and the smallest number of bands that achieves adequate results is found for the model domain used. Evidence is found of a trade-off between the localization of features in positional and spectral space when the number of bands is changed. By examining implied covariance diagnostics, the wavelet-based schemes are found, on the whole, to give results that are closer to the diagnostics obtained from the explicit matrix than the non-wavelet scheme does. Even though the nature of the covariances has the right qualities in spectral space, variances are found to be too low at some wavenumbers and vertical correlation length scales are found to be too long at most scales. The wavelet schemes are found to be good at resolving variations in position- and scale-dependent horizontal length scales, although the length scales reproduced are usually too short. The second of the wavelet-based schemes is often found to be better than the first in some important respects, but, unlike the first, it has no exact inverse transform.
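To illustrate what an "implied covariance" diagnostic means for a wavelet scheme, the sketch below builds the covariance implied by assuming background errors are uncorrelated between wavelet coefficients, with one variance per scale band, in a 1-D toy setting. The wavelet, the number of bands and the band variances are assumed values; the paper's formulation is two-dimensional (latitude-height) and more elaborate.

```python
# Implied covariance of a wavelet-diagonal background-error model (1-D toy).
# For an orthogonal wavelet transform W, B_implied = W^T D W; it is realized by
# pushing unit vectors through the transform, scaling each band, and inverting.
import numpy as np
import pywt

n = 64
wavelet, level = "db4", 3
band_var = [4.0, 2.0, 1.0, 0.5]        # one variance per band, coarsest first (assumed)

def apply_B(v):
    """Apply the implied covariance operator to a vector v."""
    coeffs = pywt.wavedec(v, wavelet, level=level, mode="periodization")
    scaled = [c * band_var[i] for i, c in enumerate(coeffs)]
    return pywt.waverec(scaled, wavelet, mode="periodization")

# Build the implied covariance matrix column by column.
B_implied = np.column_stack([apply_B(np.eye(n)[:, j]) for j in range(n)])

# Diagnostics analogous in spirit to the paper's: implied variances and correlations.
variances = np.diag(B_implied)
mid_row_corr = B_implied[n // 2] / np.sqrt(variances[n // 2] * variances)
print("variance range:", variances.min(), variances.max())
print("corr(mid, mid+1):", mid_row_corr[n // 2 + 1])
```

Comparing such implied variances and correlation length scales against those of an explicit covariance matrix is exactly the kind of diagnostic the abstract refers to, here reduced to one dimension for clarity.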
Abstract:
Objective To evaluate the effectiveness of a voluntary-sector-based befriending scheme in improving psychological wellbeing and quality of life for family carers of people with dementia. Design Single blind randomised controlled trial. Setting Community settings in East Anglia and London. Participants 236 family carers of people with primary progressive dementia. Intervention Contact with a befriender facilitator and the offer of a match with a trained lay volunteer befriender, compared with no befriender facilitator contact; all participants continued to receive "usual care." Main outcome measures Carers' mood (Hospital Anxiety and Depression Scale, depression subscale) and health-related quality of life (EuroQoL) at 15 months post-randomisation. Results The intention-to-treat analysis showed no benefit of the intervention "access to a befriender facilitator" on the primary outcome measure or on any of the secondary outcome measures. Conclusions In common with many carers' services, befriending schemes are not taken up by all carers, and providing access to a befriending scheme is not effective in improving wellbeing.
Abstract:
The nonlinearity of high-power amplifiers (HPAs) has a crucial effect on the performance of multiple-input-multiple-output (MIMO) systems. In this paper, we investigate the performance of MIMO orthogonal space-time block coding (OSTBC) systems in the presence of nonlinear HPAs. Specifically, we propose a constellation-based compensation method for HPA nonlinearity in the case with knowledge of the HPA parameters at the transmitter and receiver, where the constellation and decision regions of the distorted transmitted signal are derived in advance. Furthermore, in the scenario without knowledge of the HPA parameters, a sequential Monte Carlo (SMC)-based compensation method for the HPA nonlinearity is proposed, which first estimates the channel-gain matrix by means of the SMC method and then uses the SMC-based algorithm to detect the desired signal. The performance of the MIMO-OSTBC system under study is evaluated in terms of average symbol error probability (SEP), total degradation (TD) and system capacity, in uncorrelated Nakagami-m fading channels. Numerical and simulation results are provided and show the effects on performance of several system parameters, such as the parameters of the HPA model, output back-off (OBO) of nonlinear HPA, numbers of transmit and receive antennas, modulation order of quadrature amplitude modulation (QAM), and number of SMC samples. In particular, it is shown that the constellation-based compensation method can efficiently mitigate the effect of HPA nonlinearity with low complexity and that the SMC-based detection scheme is efficient to compensate for HPA nonlinearity in the case without knowledge of the HPA parameters.
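To make the constellation-based idea concrete, the sketch below passes a 16-QAM alphabet through a memoryless Saleh HPA model and then detects received symbols by nearest neighbour against the distorted reference constellation rather than the ideal one. The Saleh parameters are common textbook values, while the back-off and noise level are assumed; the single-antenna setup also omits the OSTBC structure of the paper.

```python
# Constellation-based compensation sketch: detect against the HPA-distorted
# reference constellation (memoryless Saleh AM/AM and AM/PM model).
import numpy as np

def saleh(x, a_a=2.1587, b_a=1.1517, a_p=4.0033, b_p=9.1040):
    """Memoryless Saleh HPA with classic textbook parameter values."""
    r = np.abs(x)
    gain = a_a / (1.0 + b_a * r**2)                  # AM/AM conversion
    phase = a_p * r**2 / (1.0 + b_p * r**2)          # AM/PM conversion (radians)
    return gain * r * np.exp(1j * (np.angle(x) + phase))

# 16-QAM constellation normalized to unit average power, then backed off.
pts = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)], dtype=complex)
pts /= np.sqrt(np.mean(np.abs(pts) ** 2))
backoff = 10 ** (-6 / 20)                            # 6 dB input back-off (assumed)
ref_distorted = saleh(backoff * pts)                 # distorted constellation known at the receiver

# Transmit random symbols through the HPA and an AWGN channel.
rng = np.random.default_rng(1)
idx = rng.integers(0, 16, size=10000)
noise = 0.05 * (rng.standard_normal(10000) + 1j * rng.standard_normal(10000))
rx = saleh(backoff * pts[idx]) + noise

# Nearest-neighbour detection against the *distorted* constellation.
det = np.argmin(np.abs(rx[:, None] - ref_distorted[None, :]), axis=1)
print("symbol error rate:", np.mean(det != idx))
```

Detecting against the distorted points rather than the ideal QAM grid is the low-complexity compensation principle referred to in the abstract; the SMC-based method addresses the harder case in which the distorted constellation cannot be computed in advance.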
Abstract:
Using a cross-layer approach, two enhancement techniques applied for adaptive modulation and coding (AMC) with truncated automatic repeat request (T-ARQ) are investigated, namely, aggressive AMC (A-AMC) and constellation rearrangement (CoRe). Aggressive AMC selects the appropriate modulation and coding schemes (MCS) to achieve higher spectral efficiency, profiting from the feasibility of using different MCSs for retransmitting a packet, whereas in the CoRe-based AMC, retransmissions of the same data packet are performed using different mappings so as to provide different degrees of protection to the bits involved, thus achieving mapping diversity gain. The performance of both schemes is evaluated in terms of average spectral efficiency and average packet loss rate, which are derived in closed form considering transmission over Nakagami-m fading channels. Numerical results and comparisons are provided. In particular, it is shown that A-AMC combined with T-ARQ yields higher spectral efficiency than the conventional AMC-based scheme while keeping the achieved packet loss rate closer to the system's requirement, and that it can meet larger spectral efficiency objectives than the scheme using AMC along with CoRe.
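The aggressive selection idea can be caricatured in a few lines: conventional AMC picks the highest-rate MCS whose SNR threshold is met, while an aggressive first transmission moves one step up the rate ladder because truncated ARQ can repair occasional failures. The thresholds, rates and one-step rule below are invented for illustration and are not the optimized design evaluated in the paper.

```python
# Toy MCS selection for AMC vs. aggressive AMC (A-AMC) under truncated ARQ.
# MCS table: (required SNR in dB, spectral efficiency in bit/s/Hz) - assumed values.
MCS_TABLE = [(0.0, 0.5), (5.0, 1.0), (9.0, 2.0), (13.0, 3.0), (17.0, 4.0)]

def select_amc(snr_db: float) -> int:
    """Conventional AMC: highest-rate MCS whose SNR threshold is satisfied."""
    best = 0
    for i, (thr, _rate) in enumerate(MCS_TABLE):
        if snr_db >= thr:
            best = i
    return best

def select_aggressive(snr_db: float, max_retx: int) -> int:
    """A-AMC caricature: go one MCS step higher when ARQ retransmissions remain."""
    base = select_amc(snr_db)
    if max_retx > 0:
        return min(base + 1, len(MCS_TABLE) - 1)
    return base

snr = 10.0
print("AMC  :", MCS_TABLE[select_amc(snr)])            # -> (9.0, 2.0)
print("A-AMC:", MCS_TABLE[select_aggressive(snr, 2)])  # -> (13.0, 3.0)
```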
Abstract:
The ever-increasing spurt in digital crimes such as image manipulation, image tampering, signature forgery, image forgery and illegal transactions has intensified the demand to combat these forms of criminal activity. In this direction, biometrics, the computer-based validation of a person's identity, is becoming more and more essential, particularly for high-security systems. The essence of biometrics is the measurement of a person's physiological or behavioral characteristics, which enables authentication of that person's identity. Biometric-based authentication is also becoming increasingly important in computer-based applications because the amount of sensitive data stored in such systems is growing. The new demands on biometric systems are robustness, high recognition rates, the capability to handle imprecision and uncertainties of a non-statistical kind, and great flexibility. It is exactly here that soft computing techniques come into play. The main aim of this write-up is to present a pragmatic view of applications of soft computing techniques in biometrics and to analyze their impact. It is found that soft computing has already made inroads, either as individual methods or in combination. Applications of various neural networks top the list, followed by fuzzy logic and evolutionary algorithms. In a nutshell, soft computing paradigms are used for biometric tasks such as feature extraction, dimensionality reduction, pattern identification, pattern mapping and the like.
Abstract:
This paper addresses biometric identification using large databases, in particular iris databases. In such applications, it is critical to have a low response time while maintaining an acceptable recognition rate. Thus, the trade-off between speed and accuracy must be evaluated for the processing and recognition parts of an identification system. In this paper, a graph-based framework for pattern recognition, called Optimum-Path Forest (OPF), is used as a classifier in a previously developed iris recognition system. The aim of this paper is to verify the effectiveness of OPF in the field of iris recognition and its performance for iris databases of various scales. The existing Gauss-Laguerre wavelet based coding scheme is used for iris encoding. The performance of the OPF and of two other classifiers, Hamming and Bayesian, is compared using small, medium and large-scale databases. This comparison shows that the OPF has a faster response for large-scale databases, thus performing better than the more accurate, but slower, classifiers.
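For reference, the simplest of the compared matchers is easy to sketch: the Hamming classifier compares binary iris codes bit by bit under their occlusion masks and returns the identity with the smallest normalized Hamming distance. The code length, the all-ones masks and the synthetic gallery below are illustrative assumptions, not the Gauss-Laguerre coding parameters used in the paper.

```python
# Normalized Hamming-distance matching of binary iris codes (illustrative only).
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits over the bits valid in both codes."""
    valid = mask_a & mask_b
    if valid.sum() == 0:
        return 1.0
    return np.count_nonzero((code_a ^ code_b) & valid) / valid.sum()

rng = np.random.default_rng(7)
n_bits = 2048                                   # assumed iris-code length
gallery = {f"subject_{i}": rng.integers(0, 2, n_bits, dtype=np.uint8) for i in range(100)}
masks = {k: np.ones(n_bits, dtype=np.uint8) for k in gallery}   # no occlusion in this toy

# A probe: subject_42's code with 5% of bits flipped to mimic acquisition noise.
probe = gallery["subject_42"].copy()
flip = rng.choice(n_bits, size=n_bits // 20, replace=False)
probe[flip] ^= 1
probe_mask = np.ones(n_bits, dtype=np.uint8)

scores = {k: hamming_distance(probe, c, probe_mask, masks[k]) for k, c in gallery.items()}
best = min(scores, key=scores.get)
print(best, scores[best])       # expected: subject_42 with distance near 0.05
```

A linear scan like this is exactly what becomes costly for large-scale databases, which is the motivation for the faster OPF-based identification studied in the paper.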