911 results for Opencv, Zbar, Computer Vision


Relevance: 100.00%

Publisher:

Abstract:

This paper introduces a recursive rule base adjustment to enhance the performance of fuzzy logic controllers. Here the fuzzy controller is constructed on the basis of a decision table (DT), relying on membership functions and fuzzy rules that incorporate heuristic knowledge and operator experience. If the controller performance is not satisfactory, it has previously been suggested that the rule base be altered by combined tuning of membership functions and controller scaling factors. The alternative approach proposed here entails alteration of the fuzzy rule base. The recursive rule base adjustment algorithm proposed in this paper has the benefit of being computationally more efficient in generating a DT, an advantage for online realization. Simulation results are presented to support this thesis. (c) 2005 Elsevier B.V. All rights reserved.
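The decision-table structure the abstract refers to can be illustrated with a minimal sketch of a two-input fuzzy controller. The fuzzy-set labels, triangular membership functions, and rule table below are generic illustrations, not the paper's actual rule base.

```python
# Minimal sketch of a decision-table (DT) fuzzy controller, assuming the
# common two-input (error, change-in-error) / one-output layout. Labels,
# membership shapes and rule outputs are illustrative, not from the paper.

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzy sets for both inputs: negative (N), zero (Z), positive (P)
SETS = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}

# Decision table: one crisp rule output (singleton) per (error, delta-error) pair
DT = {
    ("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
    ("Z", "N"): -0.5, ("Z", "Z"):  0.0, ("Z", "P"): 0.5,
    ("P", "N"):  0.0, ("P", "Z"):  0.5, ("P", "P"): 1.0,
}

def fuzzy_control(err, derr):
    """Weighted-average (singleton) defuzzification over the DT."""
    num = den = 0.0
    for (es, ds), u in DT.items():
        w = min(tri(err, *SETS[es]), tri(derr, *SETS[ds]))  # fuzzy AND = min
        num += w * u
        den += w
    return num / den if den else 0.0
```

Adjusting the rule base, as the paper proposes, amounts to rewriting entries of `DT` rather than reshaping the membership functions or scaling factors.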

Abstract:

A novel methodology is proposed for the development of neural network models for complex engineering systems exhibiting nonlinearity. This method performs neural network modeling by first establishing some fundamental nonlinear functions from a priori engineering knowledge, which are then constructed and coded into appropriate chromosome representations. Given a suitable fitness function, using evolutionary approaches such as genetic algorithms, a population of chromosomes evolves for a certain number of generations to finally produce a neural network model best fitting the system data. The objective is to improve the transparency of the neural networks, i.e. to produce physically meaningful models.
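The evolutionary loop described above can be sketched in miniature: chromosomes encode which a-priori nonlinear basis functions are active in the model, and a genetic algorithm evolves the population against a data-fit fitness function. The basis set, encoding, and GA parameters below are illustrative assumptions, not the authors' method.

```python
import math
import random

# Illustrative sketch: binary chromosomes switch a-priori nonlinear basis
# functions on/off, and a simple GA (elitist selection, one-point
# crossover, point mutation) evolves them to fit data.

BASIS = [lambda x: x, lambda x: x * x, math.sin, math.tanh]  # a-priori knowledge

def predict(chrom, x):
    # chromosome = tuple of 0/1 genes, one per basis function
    return sum(g * f(x) for g, f in zip(chrom, BASIS))

def fitness(chrom, data):
    # negative sum-squared error: higher is better, 0 is a perfect fit
    return -sum((predict(chrom, x) - y) ** 2 for x, y in data)

def evolve(data, pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1) for _ in BASIS) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, data), reverse=True)
        parents = pop[: pop_size // 2]          # elitist: best half survives
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(BASIS))  # one-point crossover
            child = list(a[:cut] + b[cut:])
            i = rng.randrange(len(BASIS))       # occasional point mutation
            child[i] ^= rng.random() < 0.2
            children.append(tuple(child))
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, data))

# Toy target generated by x + sin(x); the chromosome (1, 0, 1, 0)
# reproduces it exactly, so the GA has a discoverable optimum.
data = [(x / 10, x / 10 + math.sin(x / 10)) for x in range(-30, 31)]
best = evolve(data)
```

The transparency claim corresponds to the fact that the winning chromosome directly names which physically meaningful basis functions the model uses.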

Abstract:

Face recognition with unknown, partial distortion and occlusion is a practical problem, and has a wide range of applications, including security and multimedia information retrieval. The authors present a new approach to face recognition subject to unknown, partial distortion and occlusion. The new approach is based on a probabilistic decision-based neural network, enhanced by a statistical method called the posterior union model (PUM). PUM is an approach for ignoring severely mismatched local features and focusing the recognition mainly on the reliable local features. It thereby improves the robustness while assuming no prior information about the corruption. We call the new approach the posterior union decision-based neural network (PUDBNN). The new PUDBNN model has been evaluated on three face image databases (XM2VTS, AT&T and AR) using testing images subjected to various types of simulated and realistic partial distortion and occlusion. The new system has been compared to other approaches and has demonstrated improved performance.
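The core intuition of the posterior union model, focusing recognition on the reliable local features, can be loosely illustrated by scoring an identity only on its best-matching subset of local regions. This is a deliberate simplification, not the paper's probabilistic PUM formulation, and the scores and `keep_ratio` parameter below are invented for illustration.

```python
# Loose illustration of the PUM idea: ignore severely mismatched local
# features by averaging only the best-matching fraction of local
# similarity scores. Not the paper's probabilistic formulation.

def robust_match_score(local_scores, keep_ratio=0.6):
    """Average of the top `keep_ratio` fraction of local similarity scores,
    so occluded or distorted regions (low scores) are discarded."""
    k = max(1, int(len(local_scores) * keep_ratio))
    return sum(sorted(local_scores, reverse=True)[:k]) / k

# Hypothetical face with 5 local regions; two are occluded in the probe.
clean    = [0.9, 0.9, 0.85, 0.8, 0.8]
occluded = [0.9, 0.8, 0.85, 0.05, 0.1]   # same face, partial occlusion

plain  = sum(occluded) / len(occluded)   # plain averaging collapses (0.54)
robust = robust_match_score(occluded)    # subset score stays high (0.85)
```

The benefit mirrors the abstract's claim: robustness improves without any prior information about where the corruption is, because the unreliable regions select themselves out by scoring poorly.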

Abstract:

A scale invariant feature transform (SIFT) based mean shift algorithm is presented for object tracking in real scenarios. SIFT features are used to establish correspondences between regions of interest across frames. Meanwhile, mean shift is applied to conduct a similarity search via color histograms. The probability distributions from these two measurements are evaluated in an expectation–maximization scheme so as to achieve a maximum likelihood estimation of similar regions. This mutual support mechanism can lead to consistent tracking performance even if one of the two measurements becomes unstable. Experimental work demonstrates that the proposed mean shift/SIFT strategy improves the tracking performance of the classical mean shift and SIFT tracking algorithms in complicated real scenarios.
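The color-histogram similarity that drives the mean-shift side of such trackers is commonly measured with the Bhattacharyya coefficient between normalised histograms. A minimal sketch, with made-up pixel data and an assumed 8-bin intensity quantisation rather than the full color-space histograms a real tracker would use:

```python
import math

# Sketch of the histogram similarity used by mean-shift trackers: candidate
# regions are compared to the target via the Bhattacharyya coefficient
# between normalised histograms (1.0 = identical distributions).

def histogram(pixels, bins=8):
    """Normalised intensity histogram of an iterable of 0-255 values."""
    h = [0.0] * bins
    for p in pixels:
        h[p * bins // 256] += 1.0
    n = sum(h)
    return [v / n for v in h]

def bhattacharyya(p, q):
    return sum(math.sqrt(a * b) for a, b in zip(p, q))

# Toy regions: target, a near-identical candidate, and an unrelated region.
target    = histogram([10, 12, 200, 210, 205, 11])
candidate = histogram([11, 13, 198, 209, 207, 10])
different = histogram([100, 110, 120, 130, 125, 115])
```

Mean shift climbs this similarity surface around the previous position; the SIFT correspondences give an independent location estimate that the EM scheme fuses with it.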

Abstract:

Logistic regression and Gaussian mixture model (GMM) classifiers have been trained to estimate the probability of acute myocardial infarction (AMI) in patients based upon the concentrations of a panel of cardiac markers. The panel consists of two new markers, fatty acid binding protein (FABP) and glycogen phosphorylase BB (GPBB), in addition to the traditional cardiac troponin I (cTnI), creatine kinase MB (CKMB) and myoglobin. The effect of using principal component analysis (PCA) and Fisher discriminant analysis (FDA) to preprocess the marker concentrations was also investigated. The need for classifiers to give an accurate estimate of the probability of AMI is argued and three categories of performance measure are described, namely discriminatory ability, sharpness, and reliability. Numerical performance measures for each category are given and applied. The optimum classifier, based solely upon the samples taken on admission, was the logistic regression classifier using FDA preprocessing. This gave an accuracy of 0.85 (95% confidence interval: 0.78-0.91) and a normalised Brier score of 0.89. When samples at both admission and a further time, 1-6 h later, were included, the performance increased significantly, showing that logistic regression classifiers can indeed use the information from the five cardiac markers to accurately and reliably estimate the probability of AMI. © Springer-Verlag London Limited 2008.
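The mechanics of turning a marker panel into a probability, and of checking the reliability of those probabilities with a Brier-type score, can be sketched directly. The coefficients and concentrations below are hypothetical stand-ins, not fitted values or data from the study.

```python
import math

# Sketch: a logistic regression classifier maps a panel of cardiac marker
# concentrations to a probability of AMI via the logistic link, and a
# Brier score measures how reliable such probability estimates are.
# All weights and inputs below are hypothetical.

def ami_probability(markers, weights, bias):
    z = bias + sum(w * m for w, m in zip(weights, markers))
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

def brier_score(probs, outcomes):
    """Mean squared gap between predicted probability and 0/1 outcome
    (0 = perfect; lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical standardised panel: [cTnI, CKMB, myoglobin, FABP, GPBB]
weights = [1.2, 0.8, 0.3, 0.9, 0.7]   # illustrative only, not from the paper
p = ami_probability([2.0, 1.5, 0.5, 1.0, 0.8], weights, bias=-3.0)
```

FDA preprocessing in the paper would replace the raw panel with a discriminant projection before this step; the logistic mapping itself is unchanged.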

Abstract:

The least-mean-fourth (LMF) algorithm is known for its fast convergence and lower steady state error, especially in sub-Gaussian noise environments. Recent work on normalised versions of the LMF algorithm has further enhanced its stability and performance in both Gaussian and sub-Gaussian noise environments. For example, the recently developed normalised LMF (XE-NLMF) algorithm is normalised by the mixed signal and error powers, and weighted by a fixed mixed-power parameter. Unfortunately, this algorithm depends on the selection of this mixing parameter. In this work, a time-varying mixed-power parameter technique is introduced to overcome this dependency. A convergence analysis, transient analysis, and steady-state behaviour of the proposed algorithm are derived and verified through simulations. An enhancement in performance is obtained through the use of this technique in two different scenarios. Moreover, the tracking analysis of the proposed algorithm is carried out in the presence of two sources of nonstationarities: (1) carrier frequency offset between transmitter and receiver and (2) random variations in the environment. Close agreement between analysis and simulation results is obtained. The results show that, unlike in the stationary case, the steady-state excess mean-square error is not a monotonically increasing function of the step size. (c) 2007 Elsevier B.V. All rights reserved.
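The underlying LMF recursion descends the gradient of the fourth power of the error, giving the update w ← w + μ·e³·x. A minimal system-identification sketch of that basic update follows; the time-varying mixed-power normalisation that is the paper's contribution is not reproduced here, and the test signal and step size are illustrative choices.

```python
import math

# Sketch of the basic least-mean-fourth (LMF) update, which minimises
# E[e^4]: w <- w + mu * e^3 * x. The XE-NLMF normalisation discussed in
# the abstract is NOT implemented here.

def lmf_identify(true_w, mu=0.4, steps=3000):
    """Identify a 2-tap system from a bounded deterministic test input."""
    w = [0.0] * len(true_w)
    for n in range(steps):
        x = [math.sin(0.7 * n), math.cos(1.3 * n)]        # bounded input
        d = sum(tw * xi for tw, xi in zip(true_w, x))     # desired response
        e = d - sum(wi * xi for wi, xi in zip(w, x))      # a priori error
        w = [wi + mu * e ** 3 * xi for wi, xi in zip(w, x)]
    return w

true_w = [0.5, -0.3]
w_hat = lmf_identify(true_w)
misalignment = math.dist(w_hat, true_w)   # shrinks toward 0 as w converges
```

The cubic error term is what makes LMF attractive in sub-Gaussian noise, and also what makes its stability sensitive to signal power, which is the motivation for the normalised variants the abstract builds on.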

Abstract:

The process of making replicas of heritage objects has traditionally been undertaken by public agencies, corporations and museums, and is not common in schools. Technologies now exist that allow cheap replicas to be created: new 3D reconstruction software based on photographs, together with low-cost 3D printers, makes it possible to produce replicas at a much lower cost than traditional methods. This article describes the process of creating replicas of the sculpture Goslar Warrior by the artist Henry Moore, located in Santa Cruz de Tenerife. First, a digital model was created using the Autodesk Recap 360, Autodesk 123D Catch and Autodesk Meshmixer applications together with MakerBot MakerWare. The physical replica was then produced in polylactic acid (PLA) on a MakerBot Replicator 2 3D printer. In addition, a cost analysis is included comparing, on the one hand, the printer mentioned and, on the other, 3D printing services both online and local. Finally, a specific activity was carried out with 141 students and 12 high-school teachers, who completed a questionnaire about the use of sculptural replicas in education.

Abstract:

This study evaluates the performance of Bag-of-Visual-terms (BOV) methods for the automatic classification of digital images from the database of the artist Miquel Planas. These images are involved in the conception and design of his sculptural production. It constitutes an interesting challenge given the difficulty of categorizing scenes when they differ more in their semantic content than in the objects they contain. We have employed a kernel-based recognition method introduced by Lazebnik, Schmid and Ponce in 2006. The results are promising: on average, the performance score is approximately 70%. The experiments suggest that automatic image categorization based on computer vision methods can provide objective principles for the cataloguing of images, and that the results obtained can be applied in different fields of artistic creation.
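The bag-of-visual-words representation at the heart of such methods quantises each local image descriptor against a codebook of "visual words" and summarises the image as a normalised word histogram. A minimal sketch, with a tiny 2-D codebook standing in for the k-means codebook over SIFT-like descriptors that a real system would learn:

```python
# Sketch of the bag-of-visual-words (BOV) representation: each local
# descriptor is assigned to its nearest codebook word, and the image
# becomes a normalised histogram of word counts. The 2-D codebook below
# is illustrative; in practice it comes from k-means over many local
# descriptors.

CODEBOOK = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # 3 visual words

def nearest_word(desc):
    return min(range(len(CODEBOOK)),
               key=lambda i: sum((d - c) ** 2 for d, c in zip(desc, CODEBOOK[i])))

def bov_histogram(descriptors):
    h = [0.0] * len(CODEBOOK)
    for d in descriptors:
        h[nearest_word(d)] += 1.0
    n = sum(h)
    return [v / n for v in h]

# Hypothetical descriptors extracted from one image
image_descs = [(0.1, 0.0), (0.9, 0.1), (0.1, 0.9), (0.95, 0.05)]
hist = bov_histogram(image_descs)
```

The kernel-based method of Lazebnik, Schmid and Ponce extends this by comparing such histograms over a spatial pyramid rather than the whole image at once.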

Abstract:

Virtual reality has a number of advantages for analyzing sports interactions, such as the standardization of experimental conditions, stereoscopic vision, and complete control of animated humanoid movement. Nevertheless, in order to be useful for sports applications, accurate perception of simulated movement in the virtual sports environment is essential. This perception depends on parameters of the synthetic character, such as the number of degrees of freedom of its skeleton or the level of detail (LOD) of its graphical representation. This study focuses on the influence of the latter parameter on the perception of movement. To evaluate it, the study analyzes the judgments of immersed handball goalkeepers who play against a graphically modified virtual thrower. Five graphical representations of the throwing action were defined: a textured reference level (L0), a nontextured level (L1), a wire-frame level (L2), a moving point-light display (MLD) level with a normal-sized ball (L3), and an MLD level where the ball is represented by a point of light (L4). The results show that judgments made by goalkeepers in the L4 condition are significantly less accurate than in all the other conditions (p