965 results for Recognition algorithms


Relevance: 30.00%

Abstract:

Nowadays, many applications use digital images: face recognition to detect and tag people in photographs, security control, and many smart-city applications such as speed control on roads and highways, or traffic-light cameras that detect drivers running a red light. Digital images are also used in medicine, for example in X-rays and scanners. All of these applications depend on the quality of the image obtained. A good camera is expensive, and the image obtained also depends on external factors such as light. For these applications to work properly, image enhancement is as important as, for example, a good face detection algorithm. Image enhancement can also be applied to ordinary photographs, for pictures taken in bad light conditions, or simply to improve the contrast of an image. Some smartphone applications let users apply filters or change the brightness, colour, or contrast of their pictures. This project compares four different image enhancement techniques; after one of them is applied, an image makes better use of the whole available dynamic range. Some of the algorithms are designed for grey-scale images and others for colour images. Matlab is used to develop and present the final results. The algorithms are the Successive Means Quantization Transform (SMQT), histogram equalization (using both the built-in Matlab function and a function implemented in this project), and the V transform. In conclusion, the histogram equalization algorithm is the simplest of all; it yields a wide variability of grey levels and is not suitable for colour images. The V transform algorithm is a good option for colour images; it is linear and requires little computational power. The SMQT algorithm is non-linear, insensitive to gain and bias, and can extract the structure of the data.
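As an illustration of the simplest of the compared techniques, here is a minimal sketch of grey-scale histogram equalization in Python/NumPy (the project's own implementation was written in Matlab, so this code is an assumption for illustration only):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization for an 8-bit grey-scale image.

    Each grey level is remapped through the normalized cumulative
    histogram, spreading the output over the full 0..255 dynamic range.
    Assumes the image contains at least two distinct grey levels.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first occupied grey level
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]
```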

Relevance: 30.00%

Abstract:

The aim of this Master's thesis is the analysis, design, and development of a robust and reliable human-computer interaction interface based on visual hand-gesture recognition. The implemented functions emulate a classical hardware interaction device, the mouse, by recognizing a specific hand-gesture vocabulary in color video sequences. For this purpose, a prototype hand-gesture recognition system has been designed and implemented, composed of three stages: detection, tracking, and recognition. The system is based on machine learning methods and pattern recognition techniques, integrated with other image processing approaches to achieve high recognition accuracy at low computational cost. Regarding pattern recognition techniques, several algorithms and strategies applicable to color images and video sequences have been designed and implemented. These algorithms extract spatial and spatio-temporal features from static and dynamic hand gestures in order to identify them robustly and reliably. Finally, a visual database containing the vocabulary of gestures needed to interact with the computer has been created.
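As a rough illustration of the three-stage structure described above, a hypothetical skeleton in Python (class and method names are illustrative, not taken from the thesis):

```python
class GestureMousePipeline:
    """Hypothetical skeleton of a detection/tracking/recognition pipeline."""

    def __init__(self, detector, tracker, recognizer):
        self.detector = detector      # locates the hand in a frame
        self.tracker = tracker        # follows the hand between detections
        self.recognizer = recognizer  # maps extracted features to a gesture label

    def process_frame(self, frame):
        region = self.tracker.update(frame)
        if region is None:            # tracking lost: fall back to detection
            region = self.detector.detect(frame)
        if region is None:
            return None               # no hand visible in this frame
        features = self.recognizer.extract_features(frame, region)
        return self.recognizer.classify(features)  # e.g. a mouse action
```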

Relevance: 30.00%

Abstract:

Vision extracts useful information from images. Reconstructing the three-dimensional structure of our environment and recognizing the objects that populate it are among the most important functions of our visual system. Computer vision researchers study the computational principles of vision and aim to design algorithms that reproduce these functions. Vision is difficult: the same scene may give rise to very different images depending on illumination and viewpoint. Typically, an astronomical number of hypotheses exist that in principle would have to be analyzed to infer a correct scene description. Moreover, image information may need to be extracted at different levels of spatial and logical resolution depending on the image processing task. Knowledge of the world allows the visual system to limit the ambiguity and to greatly simplify visual computations. We discuss how simple properties of the world are captured by the Gestalt rules of grouping, how the visual system may learn and organize models of objects for recognition, and how one may control the complexity of the description that the visual system computes.

Relevance: 30.00%

Abstract:

In the past decade, tremendous advances in the state of the art of automatic speech recognition by machine have taken place. A reduction in the word error rate by more than a factor of 5 and an increase in recognition speeds by several orders of magnitude (brought about by a combination of faster recognition search algorithms and more powerful computers) have combined to make high-accuracy, speaker-independent, continuous speech recognition for large vocabularies possible in real time, on off-the-shelf workstations, without the aid of special hardware. These advances promise to make speech recognition technology readily available to the general public. This paper focuses on the speech recognition advances made through better speech modeling techniques, chiefly through more accurate mathematical modeling of speech sounds.

Relevance: 30.00%

Abstract:

Paper submitted to MML 2013, 6th International Workshop on Machine Learning and Music, Prague, September 23, 2013.

Relevance: 30.00%

Abstract:

In this project, we propose the implementation of a 3D object recognition system optimized to operate under demanding time constraints. The system must be robust, so that objects can be recognized properly in poor light conditions and in cluttered scenes with significant levels of occlusion. An important requirement must be met: the system must exhibit reasonable performance running on a low-power mobile GPU computing platform (NVIDIA Jetson TK1), so that it can be integrated into mobile robotics systems, ambient intelligence, or ambient assisted living applications. The acquisition system is based on color and depth (RGB-D) data streams provided by low-cost 3D sensors such as the Microsoft Kinect or PrimeSense Carmine. The range of algorithms and applications to be implemented and integrated is quite broad, ranging from the acquisition, outlier removal, and filtering of the input data, and the segmentation and characterization of regions of interest in the scene, to object recognition and pose estimation themselves. Furthermore, in order to validate the proposed system, we will create a 3D object dataset composed of a set of 3D models reconstructed from common household objects, as well as a handful of test scenes in which those objects appear. The scenes will be characterized by different levels of occlusion, diverse distances from the elements to the sensor, and variations in the pose of the target objects. The creation of this dataset entails the additional development of 3D data acquisition and 3D object reconstruction applications. The resulting system has many possible applications, ranging from mobile robot navigation and semantic scene labeling to human-computer interaction (HCI) systems based on visual information.
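As a hedged sketch of the front end of such a pipeline (acquisition, filtering, outlier removal), here is what the pre-processing stage could look like in Python with the Open3D library; the abstract does not name the project's actual tooling, so the library choice and parameter values are assumptions:

```python
import open3d as o3d

def preprocess_cloud(path, voxel_size=0.005):
    """Load a captured point cloud, downsample it, and remove outliers."""
    pcd = o3d.io.read_point_cloud(path)       # e.g. a Kinect capture saved as .pcd/.ply
    pcd = pcd.voxel_down_sample(voxel_size)   # uniform downsampling for speed
    # Statistical outlier removal: drop points far from their local neighbourhood.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    return pcd
```

The cleaned cloud would then feed the segmentation, recognition, and pose estimation stages described above.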

Relevance: 30.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 30.00%

Abstract:

Beyond the inherent technical challenges, current research into the three-dimensional surface correspondence problem is hampered by a lack of uniform terminology, an abundance of application-specific algorithms, and the absence of a consistent model for comparing existing approaches and developing new ones. This paper addresses these challenges by presenting a framework for analysing, comparing, developing, and implementing surface correspondence algorithms. The framework uses five distinct stages to establish correspondence between surfaces. It is general, encompassing a wide variety of existing techniques, and flexible, facilitating the synthesis of new correspondence algorithms. This paper presents a review of existing surface correspondence algorithms and shows how they fit into the correspondence framework. It also shows how the framework can be used to analyse and compare existing algorithms and to develop new ones using the framework's modular structure. Six algorithms, four existing and two new, are implemented using the framework, and each is used to match a number of surface pairs. The results demonstrate that the framework implementations faithfully reproduce the existing algorithms and that powerful new surface correspondence algorithms can be created. (C) 2004 Elsevier Inc. All rights reserved.
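Since the paper's five concrete stages are not named in this abstract, the following is only a generic Python sketch of the framework's modular idea: stages as interchangeable callables applied in sequence to an evolving correspondence state.

```python
from functools import reduce

def run_correspondence_pipeline(stages, surface_a, surface_b):
    """Apply each stage in order; every stage refines the shared state."""
    state = {"surfaces": (surface_a, surface_b), "correspondence": None}
    return reduce(lambda s, stage: stage(s), stages, state)
```

Swapping one stage implementation for another yields a different correspondence algorithm, which is the kind of modularity the framework exploits.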

Relevance: 30.00%

Abstract:

We present a video-based system which interactively captures the geometry of a 3D object in the form of a point cloud, then recognizes and registers known objects in this point cloud in a matter of seconds (fig. 1). To achieve interactive speed, we exploit both efficient inference algorithms and parallel computation, often on a GPU. The system can be broken down into two distinct phases: geometry capture and object inference. We now discuss these in further detail. © 2011 IEEE.

Relevance: 30.00%

Abstract:

Algorithmic resources for the elaboration and identification of monotone functions are considered, and several alternative structures are introduced that are more explicit in terms of structure and quantity and that can serve as elements of practical identification algorithms. General monotone recognition is considered on a multi-dimensional grid structure. A particular reconstruction problem is reduced to monotone recognition by partitioning the multi-dimensional grid into a set of binary cubes.
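To make the setting concrete, here is a hedged sketch (not the paper's algorithm) of identifying a monotone Boolean function on the binary cube {0,1}^n from a membership oracle; monotonicity means the function is fully described by its minimal true vectors:

```python
from itertools import product

def minimal_true_points(oracle, n):
    """Collect the minimal true vectors of a monotone f: {0,1}^n -> {0,1}.

    Lexicographic order is a linear extension of the dominance order,
    so every vector is visited after all vectors it dominates.
    """
    minimal = []
    for x in product((0, 1), repeat=n):
        if any(all(m_i <= x_i for m_i, x_i in zip(m, x)) for m in minimal):
            continue  # x dominates a known true point, so f(x) = 1 already
        if oracle(x):
            minimal.append(x)
    return minimal

# Example: f(x) = x0 AND (x1 OR x2) has minimal true points (1,0,1) and (1,1,0).
print(minimal_true_points(lambda x: x[0] and (x[1] or x[2]), 3))
```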

Relevance: 30.00%

Abstract:

The concept of knowledge is central to solving various problems of data mining and pattern recognition in finite spaces of Boolean or multi-valued attributes. A special form of knowledge representation, called implicative regularities, is proposed for use with two powerful tools of modern logic: inductive inference and deductive inference. The first is used to extract knowledge from data; the second applies the knowledge to compute the values of the goal attribute. A set of efficient algorithms was developed for this purpose, dealing with Boolean functions and finite predicates represented by logical vectors and matrices.
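As a simple illustration of what an implicative regularity asserts (a hedged example, not the paper's algorithms), here is a check that a rule of the form "x_i = 1 for all i in P implies y = 1" holds on a Boolean data matrix:

```python
import numpy as np

def regularity_holds(data, premise_cols, goal_col):
    """True iff no row satisfies the premise while violating the conclusion."""
    premise = np.all(data[:, premise_cols] == 1, axis=1)
    return bool(np.all(data[premise, goal_col] == 1))

X = np.array([[1, 1, 1],
              [1, 0, 0],
              [0, 1, 1]])
print(regularity_holds(X, [0, 1], 2))  # True: the only row with x0 = x1 = 1 has y = 1
```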

Relevance: 30.00%

Abstract:

This paper considers the neural-like growing networks used in an intelligent image recognition system. All operations performed on the image at the preprocessing stage, as well as the classification and storage of information about the images and their subsequent identification, are carried out entirely by the mechanisms of neural-like networks, without complex algorithms requiring large amounts of computation. With appropriate hardware support, the neural network methods considerably increase the effectiveness of solving this class of problems while preserving high accuracy of the result and a fast response, both in training mode and in identification mode.

Relevance: 30.00%

Abstract:

In this article a new approach to the optimization of estimate-calculating algorithms is suggested. It can be used to find a correct algorithm of minimal complexity within the algebraic approach to pattern recognition.

Relevance: 30.00%

Abstract:

The construction of smooth and stable decision rules in logical recognition models is considered. Logical regularities of classes are defined as conjunctions of one-place predicates that test whether feature values fall within intervals of the real axis. These conjunctions are true on special non-extendable subsets of the reference objects of some class and are optimal. The standard approach to constructing linear decision rules from given sets of logical regularities is to apply voting schemes, with the weighting coefficients of the voting procedures either chosen heuristically or obtained as solutions of a complex optimization task. Modifications of linear decision rules are proposed that search for maximal estimates of the reference objects for their classes and approximate the logical regularities by smooth sigmoid functions.
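The smoothing idea can be illustrated with a short sketch (the slope parameter k is an assumption, not a value from the paper): the hard interval predicate [a <= x <= b] is replaced by a product of sigmoids, making the regularity differentiable:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def smooth_interval(x, a, b, k=10.0):
    """Differentiable approximation of the predicate a <= x <= b."""
    return sigmoid(k * (x - a)) * sigmoid(k * (b - x))

x = np.linspace(0.0, 1.0, 5)
print(smooth_interval(x, 0.25, 0.75))  # high inside [0.25, 0.75], near 0 outside
```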

Relevance: 30.00%

Abstract:

Estimates Calculating Algorithms (ECA) have a long history of application to recognition problems; furthermore, they have formed a basis for algebraic recognition theory. Yet the use of ECA polynomials has been limited to theoretical reasoning because of the complexity of their construction and optimization. A new recognition method, "AVO-polynom", based on an ECA polynomial of simple structure, is described.