923 results for INTELIGENCIA ARTIFICIAL
Abstract:
A parallel algorithm to remove impulsive noise in digital images using heterogeneous CPU/GPU computing is proposed. The parallel denoising algorithm is based on the peer group concept and uses a Euclidean metric. In order to determine the number of pixels to be allocated to the multi-core CPU and to the GPUs, a performance analysis using large images is presented. A comparison of the parallel implementation on multi-core, on GPUs and on a combination of both is performed. Performance has been evaluated in terms of execution time and Megapixels/second. We present several optimization strategies that are especially effective in the multi-core environment and demonstrate significant performance improvements. The main advantage of the proposed noise removal methodology is its computational speed, which enables efficient filtering of color images in real-time applications.
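The peer-group test with a Euclidean metric can be illustrated by the short sequential sketch below. It is only a sketch of the general technique, not the paper's implementation: the 3x3 window, the distance threshold `d`, the minimum peer-group size `m` and the mean-of-neighbours replacement are illustrative assumptions, and the CPU/GPU work partitioning is not shown.

```python
# Minimal sketch of a peer-group impulse-noise filter for RGB images.
# Assumptions (not from the paper): 3x3 neighbourhood, fixed distance
# threshold d, minimum peer-group size m, mean-of-neighbours replacement.
import numpy as np

def peer_group_filter(img, d=45.0, m=3):
    """img: HxWx3 float array; returns a filtered copy."""
    out = img.copy()
    h, w, _ = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = img[y, x]
            window = img[y - 1:y + 2, x - 1:x + 2].reshape(9, 3)
            # Euclidean distance from the centre pixel to each neighbour
            dist = np.linalg.norm(window - centre, axis=1)
            peers = window[dist <= d]          # peer group (includes the centre)
            if len(peers) - 1 < m:             # too few peers -> impulse noise
                neigh = np.delete(window, 4, axis=0)   # drop the centre pixel
                out[y, x] = neigh.mean(axis=0)         # simple replacement
    return out
```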
Abstract:
The Conference Interpreter is a video game that simulates the work of a conference interpreter: an audio recording in English is played and, with a delay of one and a half seconds, its transcription appears with certain gaps, for which the player has to choose among the different semantic units offered at the bottom of the screen.
Abstract:
Tool path generation is one of the most complex problems in Computer Aided Manufacturing. Although some efficient strategies have been developed, most of them are only useful for standard machining. However, the algorithms used for tool path computation demand high computational performance, which makes their implementation on many existing systems very slow or even impractical. Hardware acceleration is an incremental solution that can be cleanly added to these systems while keeping everything else intact: it is completely transparent to the user, its cost is much lower and its development time is much shorter than replacing the computers with faster ones. This paper presents an optimisation that exploits the power of multi-core Graphics Processing Units (GPUs) in order to improve tool path computation. This improvement is applied to a highly accurate and robust tool path generation algorithm. The paper presents, as a case study, a fully implemented algorithm used for turning-lathe machining of shoe lasts. A comparative study shows the gain achieved in terms of total computing time: the execution time is almost two orders of magnitude shorter than on modern PCs.
Abstract:
We propose the design of a real-time system to recognize and interpret hand gestures. The acquisition devices are low-cost 3D sensors. The 3D hand pose is segmented, characterized and tracked using a growing neural gas (GNG) structure. The capacity of the system to obtain information with a high degree of freedom allows the encoding of many gestures and very accurate motion capture. The use of hand pose models combined with the motion information provided by the GNG makes it possible to deal with the problem of representing hand motion. A natural interface applied to a virtual mirror-writing system and to a hand pose estimation system is designed to demonstrate the validity of the approach.
Abstract:
3D sensors provide valuable information for mobile robotic tasks such as scene classification or object recognition, but these sensors often produce noisy data that makes it impossible to apply classical keypoint detection and feature extraction techniques. Therefore, noise removal and downsampling have become essential steps in 3D data processing. In this work, we propose the use of a 3D filtering and down-sampling technique based on a Growing Neural Gas (GNG) network. The GNG method is able to deal with outliers present in the input data and represents 3D spaces by obtaining an induced Delaunay triangulation of the input space. Experiments show how state-of-the-art keypoint detectors improve their performance when the GNG output representation is used as input data, and descriptors extracted on the improved keypoints achieve better matching in robotics applications such as 3D scene registration.
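The GNG learning loop that produces such a reduced representation can be sketched as follows. This is a generic, sequential NumPy illustration of Growing Neural Gas, not the paper's code: the hyper-parameters are arbitrary and the removal of isolated nodes is omitted for brevity.

```python
# Minimal Growing Neural Gas (GNG) sketch: the trained node positions act
# as a reduced, smoothed representation of a noisy 3D point cloud.
# Hyper-parameters are illustrative; isolated-node removal is omitted.
import numpy as np

def gng(points, max_nodes=100, iters=10000, eps_b=0.05, eps_n=0.005,
        age_max=50, lam=100, alpha=0.5, beta=0.995):
    """points: (N, 3) NumPy array; returns the learned node positions."""
    rng = np.random.default_rng(0)
    nodes = [points[rng.integers(len(points))].copy() for _ in range(2)]
    error = [0.0, 0.0]
    edges = {}                                  # (i, j) with i < j -> age

    def neighbours(i):
        return [j for (a, b) in edges for j in (a, b)
                if (a == i or b == i) and j != i]

    for t in range(iters):
        x = points[rng.integers(len(points))]
        d = [np.sum((n - x) ** 2) for n in nodes]
        s1, s2 = (int(i) for i in np.argsort(d)[:2])   # two nearest nodes
        error[s1] += d[s1]
        nodes[s1] += eps_b * (x - nodes[s1])           # move winner towards x
        for j in neighbours(s1):
            nodes[j] += eps_n * (x - nodes[j])         # move its neighbours
            key = (min(s1, j), max(s1, j))
            edges[key] += 1                            # age the edges of s1
        edges[(min(s1, s2), max(s1, s2))] = 0          # refresh/create edge
        edges = {k: a for k, a in edges.items() if a <= age_max}
        if t % lam == 0 and len(nodes) < max_nodes:
            q = int(np.argmax(error))                  # node with largest error
            nq = neighbours(q)
            if nq:
                f = max(nq, key=lambda j: error[j])
                nodes.append(0.5 * (nodes[q] + nodes[f]))   # insert new node
                error[q] *= alpha; error[f] *= alpha
                error.append(error[q])
                r = len(nodes) - 1
                edges.pop((min(q, f), max(q, f)), None)
                edges[(min(q, r), max(q, r))] = 0
                edges[(min(f, r), max(f, r))] = 0
        error = [e * beta for e in error]              # global error decay
    return np.array(nodes)
```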
Abstract:
Teaching material for the course Aplicación de Ordenadores, in the third year of the Ingeniería Técnica de Obras Públicas degree.
Abstract:
Paper presented at CoSECiVi 2014, I Congreso de la Sociedad Española para las Ciencias del Videojuego, Barcelona, 24 June 2014.
Abstract:
The potential of integrating multiagent systems and virtual environments has not been exploited to its full extent. This paper proposes a grammar-based model, called Minerva, to construct complex virtual environments that integrate the features of agents. A virtual world is described as a set of dynamic and static elements. The static part is represented by a sequence of primitives and transformations, and the dynamic elements by a series of agents. Agent activation and communication are achieved using events created by so-called event generators. The grammar defines a descriptive language with a simple syntax and a semantics defined by functions. The semantic functions allow the scene to be displayed on a graphics device and the activities of the agents to be described, including artificial intelligence algorithms and reactions to physical phenomena. To illustrate the use of Minerva, a practical example is presented: a simple robot simulator that considers the basic features of a typical robot. The result is a functional, simple simulator. Minerva is a reusable, integral and generic system that can be easily scaled, adapted and improved. The description of the virtual scene is independent of its representation and of the elements with which it interacts.
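Since the abstract does not give Minerva's actual syntax, the following toy sketch only illustrates the underlying idea: a scene split into a static part (primitives plus transformations) and a dynamic part (agents activated by events from event generators). All names and structures here are hypothetical.

```python
# Loose illustration of a Minerva-style description: static primitives with
# transformations plus agents reacting to events. Names are hypothetical;
# the abstract does not specify Minerva's real syntax or semantics.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Primitive:            # static geometry, e.g. "floor", "wall"
    name: str
    transform: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0])

@dataclass
class Agent:                # dynamic element reacting to events
    name: str
    on_event: Callable[[str], str]

@dataclass
class Scene:
    statics: List[Primitive]
    agents: List[Agent]

    def dispatch(self, event: str):
        """An event generator's output is broadcast to every agent."""
        return [a.on_event(event) for a in self.agents]

# A toy "robot simulator" scene in this style:
robot = Agent("robot", lambda ev: "turn" if ev == "obstacle" else "advance")
scene = Scene(statics=[Primitive("floor"), Primitive("wall", [1.0, 0.0, 0.0])],
              agents=[robot])
print(scene.dispatch("obstacle"))   # -> ['turn']
```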
Abstract:
Different kinds of algorithms can be chosen to compute elementary functions. Among them, the shift-and-add algorithms are worth mentioning because they have been specifically designed to be very simple and to save computer resources: almost the only operations they involve are additions and shifts, which can be easily and efficiently performed by a digital processor. Shift-and-add algorithms achieve fairly good precision with low-cost iterations. The best-known algorithm of this type is CORDIC, which can approximate a wide variety of functions with only a slight change in its iterations. In this paper, we analyze the requirements of some engineering and industrial problems in terms of the type of operands and the functions to approximate. We then propose the application of shift-and-add algorithms based on CORDIC to these problems and compare the different methods in terms of the precision of the results and the number of iterations required.
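A minimal sketch of CORDIC in rotation mode is shown below, approximating sine and cosine using only additions, shifts by powers of two and a small table of arctangent constants; the iteration count and the floating-point formulation are illustrative choices, not those analyzed in the paper.

```python
# Minimal CORDIC (rotation mode) sketch: sin/cos from shift-and-add steps
# plus a precomputed table of arctan(2^-i) constants. N is illustrative.
import math

N = 24
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]       # arctan(2^-i) table
K = 1.0
for i in range(N):                                       # CORDIC gain factor
    K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(theta):
    """theta in radians, |theta| <= pi/2 for this simple version."""
    x, y, z = K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0                      # rotation direction
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i   # shift-and-add step
        z -= d * ANGLES[i]                               # remaining angle
    return y, x                                          # (sin, cos)

print(cordic_sin_cos(0.5))   # ~ (0.4794, 0.8776)
```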
Abstract:
These days, as we are facing extremely powerful attacks on servers over the Internet (say, by Advanced Persistent Threat attackers or by surveillance by a powerful adversary), Shamir has claimed that “Cryptography is Ineffective” and some understood it as “Cryptography is Dead!” In this talk I will discuss the implications for cryptographic system design when facing such strong adversaries. Is crypto dead, or do we need to design it better, taking into account not only mathematical constraints but also system vulnerability constraints? Can crypto be effective at all when your computer or your cloud is penetrated? What is lost and what can be saved? These are very basic issues at this point in time, when we are facing a potential loss of privacy and security.
Abstract:
Identity-Based Cryptography makes use of elliptic curves that satisfy certain conditions (pairing-friendly curves); in particular, the embedding degree of these curves must be small. In this work, explicit families of elliptic curves suitable for this scenario are obtained. This kind of cryptography relies on the computation of pairings over curves, a computation made feasible by Miller's algorithm. We propose a more efficient version of this algorithm than the classical one, using the non-adjacent form (NAF) representation of an integer.
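The non-adjacent form itself can be computed in a few lines; the sketch below shows only the NAF recoding (digits in {-1, 0, 1} with no two adjacent non-zero digits), which is what reduces the number of addition steps in a Miller-style double-and-add loop. The pairing computation itself is not shown.

```python
# Sketch of non-adjacent form (NAF) recoding of a positive integer.
# Fewer non-zero digits mean fewer addition steps in double-and-add loops
# such as Miller's algorithm; the pairing itself is out of scope here.
def naf(n):
    """Return the NAF digits of n, least-significant first (digits in {-1, 0, 1})."""
    digits = []
    while n > 0:
        if n % 2 == 1:
            z = 2 - (n % 4)          # choose +1/-1 so the next bit becomes zero
            n -= z
        else:
            z = 0
        digits.append(z)
        n //= 2
    return digits

# Example: 7 = 2^3 - 2^0, so its NAF is [-1, 0, 0, 1] with two non-zero
# digits instead of the three in its binary representation 111.
print(naf(7))   # [-1, 0, 0, 1]
```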
Abstract:
The Internet of Things (IoT) is an emerging paradigm that aims at the interconnection of any object that can carry a piece of electronics, favoured by the miniaturization of components. Given the current state of development of the IoT, there is no firm proposal for guaranteeing security and end-to-end communication. In this article we present work in progress towards a delay-tolerant (DTN, Delay and Disruption Tolerant Networks) approach to communication in the IoT paradigm, and we propose adapting the existing DTN security mechanisms to the IoT.
Abstract:
The vast majority of mathematical models proposed to date for simulating malware propagation are based on the use of differential equations. These models are analyzed critically in this work, identifying their main deficiencies and proposing several alternatives to address them. In this regard, the use of cellular automata is studied as a new paradigm on which to base epidemiological models, and an explicit cellular-automaton alternative to a recent continuous model is proposed.
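As a hint of what a cellular-automaton formulation looks like, the sketch below runs a toy susceptible-infected-recovered automaton on a 2D grid; the grid topology, neighbourhood and probabilities are illustrative assumptions, not the model proposed in the paper.

```python
# Toy cellular-automaton epidemic model: each cell is a device in state
# S (susceptible), I (infected) or R (recovered), updated from its
# four-neighbourhood. Probabilities and topology are illustrative only.
import random

S, I, R = 0, 1, 2

def step(grid, p_inf=0.3, p_rec=0.1):
    n = len(grid)
    new = [row[:] for row in grid]
    for y in range(n):
        for x in range(n):
            if grid[y][x] == S:
                # infection pressure from infected neighbours (wrap-around grid)
                neigh = [grid[(y + dy) % n][(x + dx) % n]
                         for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                infected = sum(1 for s in neigh if s == I)
                if random.random() < 1 - (1 - p_inf) ** infected:
                    new[y][x] = I
            elif grid[y][x] == I and random.random() < p_rec:
                new[y][x] = R
    return new

grid = [[S] * 20 for _ in range(20)]
grid[10][10] = I                       # a single initially infected device
for _ in range(50):
    grid = step(grid)
print(sum(row.count(I) for row in grid), "infected after 50 steps")
```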
Abstract:
The importance of securing communication between people has grown as the sophistication and reach of the mechanisms provided for it have advanced. Now, in the digital era, the reach of these communications is global, and there is a need to rely on infrastructures that compensate for the impossibility of identifying both ends of the communication. It is the certification authority infrastructure and the correct management of digital certificates that have provided a more efficient approach to meeting this demand. There are, however, some aspects of this infrastructure, or of the implementation of some of its mechanisms, that can be exploited to undermine the security its use is meant to guarantee. The present research delves into some of these aspects and analyzes the validity of the solutions proposed by major software vendors against realistic scenarios.
Abstract:
The new cloud computing paradigm makes it possible for services to be provided by third parties. Among them is Database as a Service (DaaS), which allows the management and hosting of the database management system to be outsourced. Although this can be very beneficial (cost reduction, simplified management, etc.), it raises some difficulties regarding the functionality, the performance and, especially, the security of these services. This paper describes some of the existing security proposals for DaaS systems and analyzes their main characteristics, introducing a new approach based on not exclusively relational (NoSQL) technologies that offers advantages in terms of scalability and performance.