971 results for multi-biometric fusion


Relevance: 30.00%

Abstract:

When performing data fusion, one often measures where targets were and then wishes to deduce where targets currently are. There has been recent research on the processing of such out-of-sequence data. This research has culminated in the development of a number of algorithms for solving the associated tracking problem. This paper reviews these different approaches in a common Bayesian framework and proposes an architecture that orthogonalises the data association and out-of-sequence problems such that any combination of solutions to these two problems can be used together. The emphasis is not on advocating one approach over another on the basis of computational expense, but rather on understanding the relationships among the algorithms so that any approximations made are explicit. Results for a multi-sensor scenario involving out-of-sequence data association are used to illustrate the utility of this approach in a specific context.
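
As a point of reference for the algorithms reviewed, the simplest way to handle an out-of-sequence measurement is to buffer the data and re-run the filter in time order. The sketch below illustrates that baseline with a 1-D constant-velocity Kalman filter; the state model, noise levels and measurement values are illustrative assumptions, not the paper's algorithms.

```python
# Minimal sketch: handling out-of-sequence measurements by buffering and
# re-filtering in time order. This is the simple "reprocessing" baseline,
# not the specific algorithms reviewed in the paper; the state model, noise
# levels and measurement values are illustrative assumptions.
import numpy as np

def kalman_track(measurements, q=0.1, r=0.5):
    """Run a 1-D constant-velocity Kalman filter over time-ordered (t, z) pairs."""
    x = np.array([measurements[0][1], 0.0])      # position, velocity
    P = np.eye(2)
    t_prev = measurements[0][0]
    H = np.array([[1.0, 0.0]])
    for t, z in measurements[1:]:
        dt = t - t_prev
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        x = F @ x
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r
        K = P @ H.T / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        t_prev = t
    return x, P

buffer = []
def on_measurement(t, z):
    """A late (out-of-sequence) measurement simply triggers re-filtering."""
    buffer.append((t, z))
    buffer.sort(key=lambda m: m[0])
    return kalman_track(buffer)

on_measurement(0.0, 0.1)
on_measurement(1.0, 1.2)
on_measurement(3.0, 3.1)
x, P = on_measurement(2.0, 2.0)   # arrives late, is slotted back in
print("state after reprocessing:", x)
```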

Relevance: 30.00%

Abstract:

In data fusion systems, one often encounters measurements of past target locations and then wishes to deduce where the targets are currently located. Recent research on the processing of such out-of-sequence data has culminated in the development of a number of algorithms for solving the associated tracking problem. This paper reviews these different approaches in a common Bayesian framework and proposes an architecture that orthogonalises the data association and out-of-sequence problems such that any combination of solutions to these two problems can be used together. The emphasis is not on advocating one approach over another on the basis of computational expense, but rather on understanding the relationships between the algorithms so that any approximations made are explicit.

Relevance: 30.00%

Abstract:

Multibiometrics aims at improving biometric security in the presence of spoofing attempts, but it also exposes a larger number of points of attack. Standard fusion rules have been shown to be highly sensitive to spoofing attempts, even when only a single fake instance is presented. This paper presents a novel spoofing-resistant fusion scheme that detects and eliminates anomalous fusion inputs from an ensemble of evidence enriched with liveness information. The approach aims at making multibiometric systems more resistant to presentation attacks by modeling the typical behaviour of human surveillance operators detecting anomalies, as employed in many decision support systems. On the Fingerprint Liveness Detection Competition (LivDet) 2013 dataset it is shown to improve security while retaining the high accuracy level of standard fusion approaches.
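
To illustrate the general idea (not the paper's actual anomaly-detection model), the sketch below gates each match score with its liveness score before applying a standard mean rule; the threshold and score values are made-up assumptions.

```python
# Toy sketch of liveness-aware score fusion: inputs flagged as likely spoofs
# are removed from the ensemble before a standard mean rule is applied.
# Thresholds and score values are illustrative; the paper's anomaly-detection
# model is more elaborate than this simple gate.
import numpy as np

def spoof_resistant_fusion(match_scores, liveness_scores, liveness_thr=0.5):
    """Fuse normalized match scores, ignoring sources that fail the liveness check."""
    match_scores = np.asarray(match_scores, dtype=float)
    liveness_scores = np.asarray(liveness_scores, dtype=float)
    keep = liveness_scores >= liveness_thr
    if not keep.any():
        return 0.0                      # every source looks spoofed: reject
    return match_scores[keep].mean()    # mean rule over the surviving sources

# Genuine face score plus a suspiciously perfect fingerprint with low liveness.
fused = spoof_resistant_fusion([0.71, 0.99], [0.92, 0.08])
print(f"fused score: {fused:.2f}")      # only the live modality contributes
```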

Relevance: 30.00%

Abstract:

Biometrics is one of the major trends in human identification, and the fingerprint is the most widely used biometric. However, it is a common mistake to consider automatic fingerprint recognition a completely solved problem. The most popular and extensively used methods, which are minutiae-based, perform poorly on low-quality images and when only a small area of overlap exists between the template and query images. The use of multibiometrics is considered one of the keys to overcoming these weaknesses and improving the accuracy of biometric systems. This paper presents the fusion of a minutiae-based and a ridge-based fingerprint recognition method at rank, decision and score level. The implemented fusion techniques led to a 31.78% reduction of the Equal Error Rate (from 4.09% to 2.79%) and an improvement of 6 positions in the rank required for Correct Retrieval (from rank 8 to rank 2) when assessed on the FVC2002-DB1A database. © 2008 IEEE.
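
As a minimal illustration of score-level fusion, the sketch below applies min-max normalization and the sum rule to two matchers and estimates the Equal Error Rate; the genuine/impostor scores are synthetic stand-ins, not FVC2002-DB1A data, and the matcher models are assumptions.

```python
# Sketch of score-level fusion (min-max normalization + sum rule) and a simple
# Equal Error Rate estimate. The genuine/impostor scores below are synthetic;
# they only illustrate the mechanics, not the FVC2002-DB1A results.
import numpy as np

def minmax(s):
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def equal_error_rate(genuine, impostor):
    """Scan thresholds and return the point where FAR and FRR are closest."""
    thresholds = np.linspace(0.0, 1.0, 1001)
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return 0.5 * (far[i] + frr[i])

rng = np.random.default_rng(0)
# Raw matcher scores for genuine and impostor comparisons (synthetic).
m_gen, m_imp = rng.normal(55, 10, 500), rng.normal(20, 8, 500)       # minutiae matcher
r_gen, r_imp = rng.normal(0.7, 0.1, 500), rng.normal(0.4, 0.1, 500)  # ridge matcher

# Normalize each matcher over all of its scores, then apply the sum rule.
m_all = minmax(np.concatenate([m_gen, m_imp]))
r_all = minmax(np.concatenate([r_gen, r_imp]))
fused = 0.5 * (m_all + r_all)
gen, imp = fused[:500], fused[500:]
print(f"EER of the fused matcher: {equal_error_rate(gen, imp):.3%}")
```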

Relevance: 30.00%

Abstract:

The use of physical characteristics for human identification is known as biometrics. Among the many biometric traits available, the fingerprint is the most widely used. Fingerprint identification is based on impression patterns, such as the pattern of ridges and the minutiae, which are first- and second-level characteristics, respectively. Current identification systems use these two levels of fingerprint features due to the low cost of the sensors. However, recent advances in sensor technology have made it possible to use third-level features present within the ridges, such as perspiration pores. Recent studies show that the use of third-level features can increase security and fraud protection in biometric systems, since they are difficult to reproduce. In addition, recent research has also focused on multibiometric recognition due to its many advantages. The goal of this research project was to apply fusion techniques to fingerprint recognition in order to combine minutia-, ridge- and pore-based methods and thus provide more robust biometric recognition systems, and also to develop an automated fingerprint identification system using these three recognition methods. We evaluated isotropic-based and adaptive-based automatic pore extraction methods, as well as the fusion of the pore-based method with the identification methods based on minutiae and ridges. The experiments were performed on the public PolyUHRF database and showed a reduction of approximately 16% in the EER compared to the best results obtained by the methods individually.
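
For the rank-level side of such a fusion, a minimal Borda-count sketch is shown below; the candidate identities and the three rankings (minutiae, ridge and pore matchers) are made up for illustration and do not come from the project.

```python
# Minimal rank-level fusion sketch (Borda count) for three matchers.
# Candidate identities and rankings below are made up for illustration;
# the project itself also fuses at score and decision level.
from collections import defaultdict

def borda_fusion(rankings):
    """Each ranking is a list of candidate ids, best first. Lower total = better."""
    points = defaultdict(int)
    for ranking in rankings:
        for rank, candidate in enumerate(ranking):
            points[candidate] += rank
    return sorted(points, key=points.get)

minutiae_rank = ["id_17", "id_03", "id_42", "id_08"]
ridge_rank    = ["id_03", "id_17", "id_08", "id_42"]
pore_rank     = ["id_17", "id_42", "id_03", "id_08"]

print(borda_fusion([minutiae_rank, ridge_rank, pore_rank]))  # consensus ordering
```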

Relevance: 30.00%

Abstract:

Cell-based therapies and tissue engineering initiatives are gathering clinical momentum for next-generation treatment of tissue deficiencies. By using gravity-enforced self-assembly of monodispersed primary cells, we have produced adult and neonatal rat cardiomyocyte-based myocardial microtissues that could optionally be vascularized following coating with human umbilical vein endothelial cells (HUVECs). Within myocardial microtissues, individual cardiomyocytes showed native-like cell shape and structure, and established electrochemical coupling via intercalated disks. This resulted in the coordinated beating of microtissues, which was recorded by means of a multi-electrode complementary metal-oxide-semiconductor microchip. Myocardial microtissues (µm³ scale), coated with HUVECs and cast in a custom-shaped agarose mold, assembled into coherent macrotissues (mm³ scale) characterized by an extensive capillary network with typical vessel ultrastructures. Following implantation into chicken embryos, myocardial microtissues recruited the embryo's capillaries to functionally vascularize the rat-derived tissue implant. Similarly, transplantation of rat myocardial microtissues into the pericardium of adult rats resulted in time-dependent integration of myocardial microtissues and co-alignment of implanted and host cardiomyocytes within 7 days. Myocardial microtissues and custom-shaped macrotissues produced by cellular self-assembly exemplify the potential of artificial tissue implants for regenerative medicine.

Relevance: 30.00%

Abstract:

This dissertation investigates high-performance cooperative localization in wireless environments based on multi-node time-of-arrival (TOA) and direction-of-arrival (DOA) estimation in line-of-sight (LOS) and non-LOS (NLOS) scenarios. Two categories of nodes are assumed: base nodes (BNs) and target nodes (TNs). BNs are equipped with antenna arrays and are capable of estimating TOA (range) and DOA (angle). TNs are equipped with omni-directional antennas and communicate with BNs to allow BNs to localize them; thus, the proposed localization is maintained through the cooperation of BNs and TNs. First, a LOS localization method based on semi-distributed multi-node TOA-DOA fusion is proposed. The proposed technique is applicable to mobile ad-hoc networks (MANETs). We assume LOS is available between BNs and TNs. One BN is selected as the reference BN, and other nodes are localized in the coordinates of the reference BN. Each BN can independently localize the TNs located in its coverage area; in addition, a TN might be localized by multiple BNs. High-performance localization is attainable via multi-node TOA-DOA fusion. The complexity of the semi-distributed multi-node TOA-DOA fusion is low because the total computational load is distributed across all BNs. To evaluate the localization accuracy of the proposed method, we compare it with global positioning system (GPS) aided TOA (DOA) fusion, which is also applicable to MANETs. The comparison criterion is the localization circular error probability (CEP). The results confirm that the proposed method is suitable for moderate-scale MANETs, while GPS-aided TOA fusion is suitable for large-scale MANETs. Usually, the TOA and DOA of TNs are periodically estimated by BNs; thus, a Kalman filter (KF) is integrated with multi-node TOA-DOA fusion to further improve its performance. The integration of the KF and multi-node TOA-DOA fusion is compared with an extended KF (EKF) applied to multiple TOA-DOA estimations made by multiple BNs. The comparison shows that the proposed integration is stable (no divergence takes place) and that its accuracy is only slightly lower than that of the EKF, provided the EKF converges. However, the EKF may diverge while the integration of the KF and multi-node TOA-DOA fusion does not; thus, the reliability of the proposed method is higher. In addition, the computational complexity of the integration of the KF and multi-node TOA-DOA fusion is much lower than that of the EKF. In wireless environments, LOS might be obstructed, which degrades localization reliability. The antenna arrays installed at each BN are used to allow each BN to identify NLOS scenarios independently. Here, a single BN measures the phase difference across two antenna elements using a synchronized bi-receiver system and maps it into the wireless channel's K-factor. The larger K is, the more likely the channel is a LOS one. The K-factor is then used to identify NLOS scenarios. The performance of this system is characterized in terms of the probability of LOS and NLOS identification, and the latency of the method is small. Finally, a multi-node NLOS identification and localization method is proposed to improve localization reliability. In this case, multiple BNs engage in NLOS identification, determination and localization of shared reflectors, and NLOS TN localization. In NLOS scenarios, when there are three or more shared reflectors, those reflectors are localized via DOA fusion, and then a TN is localized via TOA fusion based on the localization of the shared reflectors.
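
A toy 2-D version of the multi-node TOA-DOA fusion idea is sketched below: each BN converts its range and bearing estimate into a Cartesian estimate of the TN, and the per-BN estimates are combined by inverse-variance weighting. The geometry, noise levels and weighting are illustrative assumptions, not the dissertation's actual models.

```python
# Toy 2-D sketch of multi-node TOA-DOA fusion: each base node (BN) turns its
# range (TOA) and bearing (DOA) estimate into a Cartesian estimate of the
# target node (TN), and the per-BN estimates are combined by inverse-variance
# weighting. Geometry, noise levels and weights are illustrative assumptions.
import numpy as np

def bn_estimate(bn_pos, rng_m, bearing_rad):
    """Position estimate of the TN as seen from a single BN."""
    return np.asarray(bn_pos) + rng_m * np.array([np.cos(bearing_rad), np.sin(bearing_rad)])

def fuse(estimates, variances):
    """Inverse-variance weighted average of independent per-BN estimates."""
    w = 1.0 / np.asarray(variances)
    return (np.asarray(estimates) * w[:, None]).sum(axis=0) / w.sum()

true_tn = np.array([40.0, 25.0])
bns = [np.array([0.0, 0.0]), np.array([100.0, 0.0]), np.array([50.0, 80.0])]
rng = np.random.default_rng(1)

estimates, variances = [], []
for bn in bns:
    d = true_tn - bn
    r = np.linalg.norm(d) + rng.normal(0, 1.0)            # noisy TOA-derived range
    theta = np.arctan2(d[1], d[0]) + rng.normal(0, 0.02)  # noisy DOA bearing
    estimates.append(bn_estimate(bn, r, theta))
    variances.append(1.0 + (r * 0.02) ** 2)               # crude per-BN uncertainty

print("fused TN estimate:", fuse(estimates, variances))
```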

Relevance: 30.00%

Abstract:

OBJECTIVE To evaluate the treatment response of hepatocellular carcinoma (HCC) after transarterial chemoembolization (TACE) with a new real-time imaging fusion technique combining contrast-enhanced ultrasound (CEUS) with multi-slice computed tomography (CT), in comparison to conventional post-interventional follow-up. MATERIAL AND METHODS 40 patients with HCC (26 male, ages 46-81 years) were evaluated 24 hours after TACE using CEUS with ultrasound volume navigation and image fusion with CT, and the results were compared to non-enhanced CT and to follow-up contrast-enhanced CT after 6-8 weeks. Reduction of tumor vascularization to less than 25% was regarded as "successful" treatment, whereas reduction to levels >25% was considered a "partial" treatment response. Homogeneous lipiodol retention was regarded as successful treatment in non-enhanced CT. RESULTS Post-interventional image fusion of CEUS with CT was feasible in all 40 patients. In 24 patients (24/40), post-interventional image fusion with CEUS revealed residual tumor vascularity, which was confirmed by contrast-enhanced CT 6-8 weeks later in 24/24 patients. In 16 patients (16/40), post-interventional image fusion with CEUS indicated successful treatment, but follow-up CT detected residual viable tumor in 6 of them (6/16). Non-enhanced CT did not identify any case of treatment failure. Image fusion with CEUS assessed treatment efficacy with a specificity of 100%, a sensitivity of 80% and a positive predictive value of 1 (negative predictive value 0.63). CONCLUSIONS Image fusion of CEUS with CT allows a reliable, highly specific post-interventional evaluation of embolization response with good sensitivity and without any further radiation exposure. It can detect residual viable tumor at an early stage, enabling close patient monitoring or early re-treatment.
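
The reported diagnostic metrics follow directly from the per-patient counts given in the abstract; a short arithmetic check:

```python
# Re-deriving the reported diagnostic metrics from the per-patient counts in
# the abstract: 24/40 patients had residual tumor detected by CEUS-CT fusion
# (all confirmed at follow-up), 16/40 appeared successfully treated but 6 of
# them showed residual tumor at follow-up CT.
tp, fp = 24, 0      # fusion positive; follow-up positive / negative
fn, tn = 6, 10      # fusion negative; follow-up positive / negative

sensitivity = tp / (tp + fn)            # 24 / 30 = 0.80
specificity = tn / (tn + fp)            # 10 / 10 = 1.00
ppv = tp / (tp + fp)                    # 24 / 24 = 1.00
npv = tn / (tn + fn)                    # 10 / 16 = 0.625

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f}")
```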

Relevance: 30.00%

Abstract:

This paper addresses the issue of fully automatic segmentation of a hip CT image with the goal of preserving the joint structure for clinical applications in hip disease diagnosis and treatment. For this purpose, we propose a Multi-Atlas Segmentation Constrained Graph (MASCG) method. The MASCG method uses multi-atlas based mesh fusion results to initialize a bone sheetness based multi-label graph cut for an accurate hip CT segmentation, which has the inherent advantage of automatically separating the pelvic region from the bilateral proximal femoral regions. We then introduce a graph cut constrained graph search algorithm to further improve the segmentation accuracy around the bilateral hip joint regions. Taking manual segmentation as the ground truth, we evaluated the present approach on 30 hip CT images (60 hips) with a 15-fold cross validation. When the present approach was compared to manual segmentation, an average surface distance error of 0.30 mm, 0.29 mm, and 0.30 mm was found for the pelvis, the left proximal femur, and the right proximal femur, respectively. A further look at the bilateral hip joint regions demonstrated an average surface distance error of 0.16 mm, 0.21 mm and 0.20 mm for the acetabulum, the left femoral head, and the right femoral head, respectively.
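
The average surface distance used in this evaluation can be sketched as the symmetric mean nearest-neighbour distance between two surface point sets; in the snippet below the random points merely stand in for actual segmentation meshes, so it only illustrates the metric, not the MASCG pipeline.

```python
# Sketch of the (symmetric) average surface distance used to evaluate the
# segmentations, computed between two surface point clouds with KD-trees.
# The random points below stand in for actual pelvis/femur surface meshes.
import numpy as np
from scipy.spatial import cKDTree

def average_surface_distance(surface_a, surface_b):
    """Mean of the two directed average nearest-neighbour distances (in mm)."""
    d_ab, _ = cKDTree(surface_b).query(surface_a)   # A -> B distances
    d_ba, _ = cKDTree(surface_a).query(surface_b)   # B -> A distances
    return 0.5 * (d_ab.mean() + d_ba.mean())

rng = np.random.default_rng(0)
auto_surface = rng.uniform(0, 100, size=(5000, 3))              # automatic result
manual_surface = auto_surface + rng.normal(0, 0.3, (5000, 3))   # "ground truth"
print(f"ASD = {average_surface_distance(auto_surface, manual_surface):.2f} mm")
```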

Relevance: 30.00%

Abstract:

Infrared (IR) interferometry is a method for measuring the line-electron density of fusion plasmas. The significant performance achieved by FPGAs in solving digital signal processing tasks advocates the use of this type of technology in two-color IR interferometers of modern stellarators, such as the TJ-II (Madrid, Spain) and the future W7-X (Greifswald, Germany). In this work the implementation of a line-average electron density measuring system in an FPGA device is described. Several optimizations for multichannel systems are detailed and test results from the TJ-II as well as from a W7-X prototype are presented.
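
The quantity such a system computes can be sketched from the standard two-color model, in which each measured phase mixes a vibration term (proportional to 1/λ) and a plasma term (proportional to λ), so two wavelengths determine both unknowns. The snippet below inverts that relation in software; the wavelengths and phase values are chosen for illustration and are not taken from the TJ-II or W7-X instruments.

```python
# Sketch of the standard two-color interferometry model that such systems
# implement in hardware: phi_i = 2*pi*dL/lambda_i + r_e*lambda_i*N_L, where dL
# is the vibration path length and N_L the line-integrated electron density.
# Wavelengths and phases below are illustrative, not TJ-II or W7-X values.
import numpy as np

R_E = 2.8179403262e-15          # classical electron radius [m]

def line_integrated_density(phi1, phi2, lam1, lam2):
    """Invert the two-color phase equations for (dL, N_L)."""
    n_l = (lam1 * phi1 - lam2 * phi2) / (R_E * (lam1**2 - lam2**2))
    d_l = (lam1 * phi1 - R_E * lam1**2 * n_l) / (2 * np.pi)
    return d_l, n_l

# Synthetic example: 10.6 um CO2 beam and a 633 nm visible beam.
dl_true, nl_true = 1.5e-6, 5e19                 # 1.5 um vibration, 5e19 m^-2
lam1, lam2 = 10.6e-6, 633e-9
phi1 = 2 * np.pi * dl_true / lam1 + R_E * lam1 * nl_true
phi2 = 2 * np.pi * dl_true / lam2 + R_E * lam2 * nl_true
print(line_integrated_density(phi1, phi2, lam1, lam2))  # recovers (dL, N_L)
```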

Relevance: 30.00%

Abstract:

Activity recognition is an active research field nowadays, as it enables the development of highly adaptive applications, e.g. in the field of personal health. In this paper, a lightweight high-level fusion algorithm for detecting the activity that an individual is performing is presented. The algorithm relies on data gathered from accelerometers placed on different parts of the body and on biometric sensors. Inertial sensors allow activity to be detected by analyzing signal features such as amplitude or peaks. In addition, there is a relationship between activity intensity and the biometric response, which can be considered together with acceleration data to improve the accuracy of activity detection. The proposed algorithm is designed to work with minimal computational cost, making it ready to run on a mobile device as part of a context-aware application. In order to support different user scenarios, the algorithm offers best-effort activity estimation: its estimation quality depends on the position and number of the available inertial sensors, as well as on the presence of biometric information.
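
A heavily simplified sketch of such high-level fusion is shown below: a couple of accelerometer features (signal deviation and step-like peaks) are combined with a heart-rate reading through hand-made rules. The sampling rate, thresholds and rule set are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative sketch of light high-level fusion for activity detection:
# simple features from a 3-axis accelerometer (signal magnitude, step-like
# peaks) are combined with a heart-rate reading through hand-made rules.
# Sampling rate, thresholds and the rule set are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

def detect_activity(acc_xyz, fs_hz, heart_rate_bpm=None):
    magnitude = np.linalg.norm(acc_xyz, axis=1) - 9.81         # remove gravity
    intensity = magnitude.std()                                 # intensity feature
    peaks, _ = find_peaks(magnitude, height=1.0, distance=int(0.3 * fs_hz))
    step_rate = len(peaks) / (len(magnitude) / fs_hz)           # peaks per second

    if intensity < 0.3:
        activity = "resting"
    elif step_rate > 2.2 or intensity > 4.0:
        activity = "running"
    else:
        activity = "walking"

    # Best-effort refinement when biometric information is available.
    if heart_rate_bpm is not None and activity == "resting" and heart_rate_bpm > 110:
        activity = "exercising (non-ambulatory)"
    return activity

fs = 50.0
t = np.arange(0, 10, 1 / fs)
walk = np.c_[0.5 * np.sin(2 * np.pi * 1.0 * t),
             0.3 * np.cos(2 * np.pi * 1.0 * t),
             9.81 + 2.0 * np.abs(np.sin(2 * np.pi * 1.0 * t))]
print(detect_activity(walk, fs, heart_rate_bpm=95))
```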

Relevance: 30.00%

Abstract:

Nowadays three-dimensional imaging techniques are common in several fields, but especially in biomedical imaging, where we can find a wide range of techniques including Laser Scanning Confocal Microscopy, Laser Scanning Two-Photon Microscopy, Light Sheet Fluorescence Microscopy, Magnetic Resonance Imaging, Positron Emission Tomography, Optical Coherence Tomography, 3D Ultrasound Imaging, etc. A common denominator of all these applications is the constant need to further increase the resolution and quality of the acquired images. Interestingly, in some of the mentioned three-dimensional imaging techniques a remarkable situation arises: while a single volume does not contain enough information to represent the object being imaged within the quality parameters required by the final application, the acquisition scheme allows recording several volumes which represent different views of a given object, with each of the views providing complementary information. In this kind of situation one can get a better understanding of the object by combining several views instead of looking at each of them separately. Within such context, this PhD Thesis proposes, develops and validates a new image processing methodology based on the discrete wavelet transform for the combination, or fusion, of several views containing complementary information about a given object. The proposed fusion method exploits the scale and orientation decomposition capabilities of the discrete wavelet transform to integrate in a single volume all the available information distributed among the set of acquired views. The work focuses on two different biomedical imaging modalities which provide such multi-view datasets. The first one is a particular fluorescence microscopy technique, Light Sheet Fluorescence Microscopy, used for imaging and gaining understanding of the early development of live embryos from different animal models (like zebrafish or sea urchin). The second is Delayed Enhancement Magnetic Resonance Imaging, a valuable tool for assessing the viability of myocardial tissue in patients suffering from different cardiomyopathies. As part of this work, the proposed method was implemented and then validated on both imaging modalities. For the fluorescence microscopy application, the fusion results show improved contrast and detail discrimination when compared to any of the individual views, and the method does not rely on prior knowledge of the system's point spread function (PSF); moreover, the results show improved performance with respect to previous PSF-independent methods. With respect to its application to Delayed Enhancement Magnetic Resonance Imaging, the resulting fused volumes show a quantitative sharpness improvement and enable an easier and more complete interpretation of complex three-dimensional scar and heterogeneous tissue information in ischemic cardiomyopathy patients. In both applications, the results of this thesis are currently in use in the clinical and research centers with which the author collaborated during this work. An implementation of the fusion method has also been made freely available to the scientific community. Finally, an international patent application has been filed covering the visualization method developed for the Magnetic Resonance Imaging application.
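
A 2-D sketch of the underlying wavelet-fusion idea is given below using PyWavelets: each view is decomposed, the approximation bands are averaged, the detail coefficient with the larger magnitude is kept, and the result is reconstructed. The thesis works on 3-D volumes with a more elaborate rule set; the images and fusion rules here are simplified assumptions.

```python
# 2-D sketch of wavelet-domain fusion of two complementary views using
# PyWavelets: approximation bands are averaged, and for each detail band the
# coefficient with the larger magnitude is kept. The thesis applies this idea
# to 3-D volumes with a more elaborate rule set; the images here are synthetic.
import numpy as np
import pywt

def fuse_views(view_a, view_b, wavelet="db2", level=3):
    ca = pywt.wavedec2(view_a, wavelet, level=level)
    cb = pywt.wavedec2(view_b, wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]                      # approximation: average
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

# Two synthetic "views": each one sees only half of the structure sharply.
rng = np.random.default_rng(0)
truth = np.zeros((128, 128))
truth[40:90, 40:90] = 1.0
view_a = truth + rng.normal(0, 0.05, truth.shape); view_a[:, 64:] *= 0.3
view_b = truth + rng.normal(0, 0.05, truth.shape); view_b[:, :64] *= 0.3
fused = fuse_views(view_a, view_b)
print(fused.shape)
```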

Relevance: 30.00%

Abstract:

This paper addresses the question of maximizing classifier accuracy for classifying task-related mental activity from Magnetoencephalography (MEG) data. We propose the use of different sources of information and introduce an automatic channel selection procedure. To determine an informative set of channels, our approach combines a variety of machine learning algorithms: feature subset selection methods, classifiers based on regularized logistic regression, information fusion, and multiobjective optimization based on probabilistic modeling of the search space. The experimental results show that our proposal is able to improve classification accuracy compared to approaches whose classifiers use only one type of MEG information or for which the set of channels is fixed a priori.
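
A stripped-down sketch of one ingredient, channel selection driven by L1-regularized logistic regression, is shown below using scikit-learn; the data is random stand-in data rather than MEG recordings, and the full pipeline in the paper additionally uses information fusion and multiobjective optimization.

```python
# Simplified sketch of automatic channel selection for classification:
# an L1-regularized logistic regression is fit on all channel features, the
# channels with the largest aggregate coefficient magnitude are kept, and a
# classifier is re-trained on that subset. The data below is random stand-in
# data, not MEG recordings; the paper's pipeline is considerably richer.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

n_trials, n_channels, n_feats = 200, 100, 4           # features per channel
rng = np.random.default_rng(0)
X = rng.normal(size=(n_trials, n_channels * n_feats))
y = rng.integers(0, 2, n_trials)
X[y == 1, :5 * n_feats] += 0.8                         # 5 informative channels

selector = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
weight_per_channel = np.abs(selector.coef_[0]).reshape(n_channels, n_feats).sum(axis=1)
selected = np.argsort(weight_per_channel)[::-1][:10]   # keep the 10 strongest channels

X_sel = X.reshape(n_trials, n_channels, n_feats)[:, selected, :].reshape(n_trials, -1)
score = cross_val_score(LogisticRegression(max_iter=1000), X_sel, y, cv=5).mean()
print("selected channels:", np.sort(selected), f"accuracy: {score:.2f}")
```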

Relevance: 30.00%

Abstract:

In the last decade, multi-sensor data fusion has become a widely demanded discipline for achieving advanced solutions that can be applied in many real-world situations, both civil and military. In Defence, accurate detection of all target objects is fundamental to maintaining situational awareness, to locating threats in the battlefield, and to identifying and protecting own forces strategically. Civil applications, such as traffic monitoring, have similar requirements in terms of object detection and reliable identification of incidents in order to ensure the safety of road users. With an appropriate data fusion technique, these systems can automatically exploit all relevant information from multiple sources to meet, for instance, mission needs or to support daily supervision operations. This paper focuses on the application of data fusion to active vehicle monitoring in a particular area of high-density traffic, and on how it is redirecting the research activities carried out in the computer vision, signal processing and machine learning fields to improve the effectiveness of detection and tracking in ground surveillance scenarios in general. Specifically, our system fuses data at the feature level, with features extracted from a video camera and a laser scanner. In addition, a stochastic tracking stage that introduces particle filters into the model to deal with uncertainty due to occlusions and to improve the previous detection output is presented. It has been shown that this computer vision tracker contributes to detecting objects even under poor visual information. Finally, in the same way that humans are able to analyze both temporal and spatial relations among items in a scene to assign them a meaning, once the target objects have been correctly detected and tracked it is desirable that machines provide a trustworthy description of what is happening in the scene under surveillance. Accomplishing such an ambitious task requires a machine-learning-based hierarchical architecture able to extract and analyse behaviours at different abstraction levels. A real experimental testbed has been implemented for the evaluation of the proposed modular system: a closed circuit where real traffic situations can be simulated. First results have shown the strengths of the proposed system.
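
A compact sketch of the particle-filtering idea behind the tracking stage is given below: particles are propagated with a simple random-walk motion model, weighted against the current detection, and resampled, with occlusions handled by skipping the update. In the real system the weights would come from the fused camera/laser features; all parameters here are illustrative assumptions.

```python
# Compact sketch of the particle-filter idea behind the tracker: particles are
# propagated with a random-walk motion model, weighted by how well they match
# the current detection (the weight would normally come from the fused
# camera/laser features), and resampled. Missing detections (occlusion) simply
# skip the weighting step. All parameters are illustrative assumptions.
import numpy as np

class ParticleTracker2D:
    def __init__(self, init_xy, n_particles=500, motion_std=1.0, meas_std=2.0):
        self.rng = np.random.default_rng(0)
        self.particles = init_xy + self.rng.normal(0, motion_std, (n_particles, 2))
        self.motion_std, self.meas_std = motion_std, meas_std

    def step(self, detection_xy=None):
        # Predict: diffuse particles with the motion model.
        self.particles += self.rng.normal(0, self.motion_std, self.particles.shape)
        if detection_xy is not None:
            # Update: weight particles by a Gaussian likelihood of the detection.
            d2 = ((self.particles - detection_xy) ** 2).sum(axis=1)
            w = np.exp(-0.5 * d2 / self.meas_std ** 2)
            w /= w.sum()
            idx = self.rng.choice(len(self.particles), len(self.particles), p=w)
            self.particles = self.particles[idx]
        return self.particles.mean(axis=0)        # state estimate

tracker = ParticleTracker2D(init_xy=np.array([0.0, 0.0]))
detections = [np.array([1.0, 0.5]), None, None, np.array([4.2, 2.1])]  # None = occluded
for z in detections:
    print(tracker.step(z))
```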

Relevance: 30.00%

Abstract:

In recent years, the computer vision community has shown great interest in depth-based applications thanks to the performance and flexibility of the new generation of RGB-D imagery. In this paper, we present an efficient background subtraction algorithm based on the fusion of multiple region-based classifiers that processes the depth and color data provided by RGB-D cameras. Foreground objects are detected by combining a region-based foreground prediction (based on depth data) with different background models (based on a Mixture of Gaussians algorithm) that provide color and depth descriptions of the scene at pixel and region level. The information given by these modules is fused in a mixture-of-experts fashion to improve the foreground detection accuracy. The main contributions of the paper are the region-based models of both background and foreground, built from the depth and color data. The results obtained on different database sequences demonstrate that the proposed approach leads to higher detection accuracy with respect to existing state-of-the-art techniques.
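
A pixel-level sketch of combining color and depth cues is shown below, using OpenCV's MOG2 for the color background and a fixed depth background model for the depth cue; the paper's region-based models and mixture-of-experts fusion are considerably richer than this simple OR-combination, and the synthetic frames merely stand in for RGB-D camera data.

```python
# Pixel-level sketch of fusing color and depth cues for background subtraction:
# OpenCV's MOG2 models the color background while a fixed depth-background
# model flags pixels that are significantly closer than the learned background.
# The paper's region-based models and mixture-of-experts fusion are richer
# than this OR-combination; frames here would come from an RGB-D camera.
import numpy as np
import cv2

class RGBDBackgroundSubtractor:
    def __init__(self, depth_background, depth_margin_mm=150):
        self.color_bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
        self.depth_bg = depth_background.astype(np.float32)  # e.g. median of empty scene
        self.margin = depth_margin_mm

    def apply(self, color_frame, depth_frame):
        color_mask = self.color_bg.apply(color_frame)                      # 0 / 255
        valid = depth_frame > 0                                            # 0 = no reading
        depth_mask = valid & (depth_frame < self.depth_bg - self.margin)   # closer than bg
        fused = (color_mask > 0) | depth_mask                              # simple OR fusion
        return fused.astype(np.uint8) * 255

# Synthetic 240x320 frames: a bright box 1 m in front of a 3 m background wall.
depth_bg = np.full((240, 320), 3000, np.uint16)
color = np.zeros((240, 320, 3), np.uint8); color[100:140, 150:200] = 255
depth = depth_bg.copy(); depth[100:140, 150:200] = 2000
subtractor = RGBDBackgroundSubtractor(depth_bg)
mask = subtractor.apply(color, depth)
print("foreground pixels:", int((mask > 0).sum()))
```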