974 results for LMS Structure, Ternary Filtering, Algorithm


Relevance:

100.00%

Publisher:

Abstract:

A high-sample-rate 3D median filtering processor architecture is proposed, based on a novel 3D median filtering algorithm that reduces computing complexity compared with the traditional bubble-sorting algorithm. A 3 x 3 x 3 filter processor is implemented in VHDL, and simulation verifies that the processor can process a 128 x 128 x 96 MRI image in 0.03 seconds while running at 50 MHz.
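For reference, a minimal software baseline of 3 x 3 x 3 median filtering is sketched below in NumPy. The paper's reduced-complexity algorithm targets hardware and its details are not given in the abstract, so this sketch simply takes the median of each window; names and sizes are illustrative.

```python
import numpy as np

def median_filter_3d(volume, size=3):
    """Reference 3x3x3 median filter over a 3D volume (edge-padded).

    A plain software baseline, not the paper's reduced-complexity
    hardware algorithm, whose details the abstract does not give.
    """
    r = size // 2
    padded = np.pad(volume, r, mode="edge")
    out = np.empty_like(volume)
    for z in range(volume.shape[0]):
        for y in range(volume.shape[1]):
            for x in range(volume.shape[2]):
                window = padded[z:z + size, y:y + size, x:x + size]
                out[z, y, x] = np.median(window)
    return out

# Example: a small random volume; a 128 x 128 x 96 MRI volume is
# processed identically, just far more slowly in pure Python.
vol = np.random.randint(0, 256, (8, 8, 8)).astype(np.uint8)
filtered = median_filter_3d(vol)
```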

Relevance:

100.00%

Publisher:

Abstract:

In this paper we address the problem of decentralized and robust linear filtering for target tracking using networks of (radar) sensors that take nonlinear range and bearing measurements. The algorithm introduced in this paper permits efficient data fusion from multiple sensors through a summation-style fusion architecture. Moreover, we prove that the state estimation error of the linear filtering algorithm is bounded.
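A summation-style fusion can be illustrated with the information form of a linear filter, where each sensor's contribution is simply added. The sketch below is a generic information-filter fusion step under linearized measurements, not the paper's specific algorithm; radar range/bearing measurements would first be linearized into (H, R, z) form.

```python
import numpy as np

def fuse_information(Y0, y0, sensors):
    """Summation-style fusion in information form: each sensor adds
    H^T R^-1 H to the information matrix and H^T R^-1 z to the
    information vector. A generic sketch, not the paper's filter."""
    Y, y = Y0.copy(), y0.copy()
    for H, R, z in sensors:
        Rinv = np.linalg.inv(R)
        Y += H.T @ Rinv @ H
        y += H.T @ Rinv @ z
    P = np.linalg.inv(Y)   # fused covariance
    return P @ y, P        # fused estimate, covariance

# Two sensors observing a 2D position directly (H = I).
sensors = [(np.eye(2), 0.5 * np.eye(2), np.array([1.1, 2.0])),
           (np.eye(2), 1.0 * np.eye(2), np.array([0.9, 2.2]))]
x, P = fuse_information(0.01 * np.eye(2), np.zeros(2), sensors)
print(x)
```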

Relevance:

100.00%

Publisher:

Abstract:

The behaviour and composition of human cognition have yet to be defined comprehensively. Reproducing them artificially is a foremost research area in artificial intelligence and related fields. In this chapter we look at advances made in the unsupervised learning paradigm (self-organising methods) and its potential in realising artificial cognitive machines. The first section delineates the intricacies of the process of learning in humans, with an articulate discussion of the function of thought and the function of memory. The self-organising method and the biological rationalisations that led to its development are explored in the second section. The next focus is the effect of structure restrictions on unsupervised learning and the enhancements resulting from a structure-adapting learning algorithm. Generation of a hierarchy of knowledge using this algorithm is also discussed. Section four looks at new means of knowledge acquisition through this adaptive unsupervised learning algorithm, while the fifth examines the contribution of multimodal representation of inputs to unsupervised learning. The chapter concludes with a summary of the extensions outlined.
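As a concrete reference point for the self-organising methods discussed, here is a minimal fixed-structure SOM training loop in NumPy. The structure-adapting algorithm from the chapter would grow or prune this grid, which the sketch does not attempt; grid size and learning schedule are illustrative.

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organising map: a fixed-structure baseline against
    which structure-adapting variants (e.g. growing maps) are compared."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.dstack(np.mgrid[0:h, 0:w]).astype(float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)
        sigma = sigma0 * (1 - epoch / epochs) + 1e-3
        for x in data:
            dist = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(dist), dist.shape)  # best match
            gdist = np.linalg.norm(coords - np.array(bmu), axis=2)
            nbh = np.exp(-(gdist ** 2) / (2 * sigma ** 2))       # neighbourhood
            weights += lr * nbh[..., None] * (x - weights)
    return weights

data = np.random.default_rng(1).random((200, 3))  # e.g. RGB samples
som = train_som(data)
```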

Relevance:

100.00%

Publisher:

Abstract:

Within a metacommunity, both environmental and spatial processes regulate variation in local community structure. The strength of these processes may vary depending on species traits (e.g., dispersal mode) or the characteristics of the regions studied (e.g., spatial extent, environmental heterogeneity). We studied the metacommunity structuring of three groups of stream macroinvertebrates differing in their overland dispersal mode (passive dispersers with aquatic adults; passive dispersers with terrestrial adults; active dispersers with terrestrial adults). We predicted that environmental structuring should be more important for active dispersers, because of their better ability to track environmental variability, and that spatial structuring should be more important for species with aquatic adults, because of stronger dispersal limitation. We sampled a total of 70 stream riffle sites in three drainage basins. Environmental heterogeneity was unrelated to spatial extent among our study regions, allowing us to examine the effects of these two factors on metacommunity structuring. We used partial redundancy analysis and Moran's eigenvector maps based on overland and watercourse distances to study the relative importance of environmental control and spatial structuring. We found that, compared with environmental control, spatial structuring was generally negligible, and it did not vary according to our predictions. In general, active dispersers with terrestrial adults showed stronger environmental control than the two passively dispersing groups, suggesting that species dispersing actively are better able to track environmental variability. There were no clear differences between the results based on watercourse and overland distances. The variability in metacommunity structuring among basins was not related to differences in environmental heterogeneity or spatial extent. Our study emphasized that (1) environmental control prevails in stream metacommunities, (2) dispersal mode may have an important effect on metacommunity structuring, and (3) factors other than spatial extent or environmental heterogeneity contributed to the differences among the basins.

Relevance:

100.00%

Publisher:

Abstract:

In this work, a tabu search algorithm for solving uncapacitated location problems is presented. The uncapacitated location problem is a classic localization problem that occurs in many practical situations. It consists in determining, at the minimum possible cost, the best locations in a network for installing facilities to serve the customers' demands. It is assumed that there is a cost associated with opening a facility and a cost for serving each customer from any open facility. In the particular case of the uncapacitated location problem, there is no capacity limitation on serving the customers' demands. Some parameters of the algorithm influence the solution quality; these parameters were tested and optimal values for them were obtained. The results show that the proposed algorithm finds the optimal solution for all small tested problems while keeping a good compromise between solution quality and computational time. To solve bigger problems, however, the structure of the algorithm must be changed. The implemented algorithm is integrated into a computational platform for the solution of logistic problems.
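A minimal sketch of such a tabu search for the uncapacitated facility location problem is given below. The move structure (single open/close flip), tabu tenure, and iteration count are illustrative stand-ins for the parameters the abstract says were tuned.

```python
import random

def tabu_search_uflp(open_cost, serve_cost, iters=200, tenure=7, seed=0):
    """Tabu search sketch for the uncapacitated facility location problem.

    open_cost[i]: cost of opening facility i.
    serve_cost[i][j]: cost of serving customer j from facility i.
    tenure and iters stand in for the tunable parameters the abstract
    refers to; the values here are illustrative, not the tuned optima.
    """
    rng = random.Random(seed)
    n = len(open_cost)

    def total(open_set):
        if not open_set:
            return float("inf")
        return (sum(open_cost[i] for i in open_set)
                + sum(min(serve_cost[i][j] for i in open_set)
                      for j in range(len(serve_cost[0]))))

    current = {i for i in range(n) if rng.random() < 0.5} or {0}
    best, best_cost = set(current), total(current)
    tabu = {}
    for it in range(iters):
        candidates = []
        for i in range(n):
            neigh = current ^ {i}  # flip facility i open/closed
            cost = total(neigh)
            if tabu.get(i, -1) < it or cost < best_cost:  # aspiration rule
                candidates.append((cost, i, neigh))
        cost, i, current = min(candidates)
        tabu[i] = it + tenure
        if cost < best_cost:
            best, best_cost = set(current), cost
    return best, best_cost

# Toy instance: 3 potential facilities, 4 customers.
oc = [10, 12, 8]
sc = [[2, 9, 7, 4], [8, 1, 3, 6], [5, 5, 2, 3]]
print(tabu_search_uflp(oc, sc))
```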

Relevance:

100.00%

Publisher:

Abstract:

This work studies the optimization and control of a styrene polymerization reactor. The proposed strategy deals with the case where, because of market conditions and equipment deterioration, the optimal operating point of the continuous reactor shifts significantly over the operating time, so the control system has to search for this optimum point while keeping the reactor system stable at any possible point. The approach considered here consists of three layers: Real Time Optimization (RTO), Model Predictive Control (MPC), and a Target Calculation (TC) layer that coordinates the communication between the two other layers and guarantees the stability of the whole structure. The proposed algorithm is simulated with the phenomenological model of a styrene polymerization reactor, which has been widely used as a benchmark for process control. The complete optimization structure for the styrene process, including disturbance rejection, is developed. The simulation results show the robustness of the proposed strategy and its capability to deal with disturbances while the economic objective is optimized.
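The three-layer interplay can be sketched as follows. The reactor model, RTO economics, and MPC law below are hypothetical stubs meant only to show how the layers exchange setpoints and targets; they are not the paper's phenomenological model or controllers.

```python
import numpy as np

# Skeleton of the three-layer structure described above. All three
# functions are placeholder stand-ins (assumptions), showing only the
# flow of information between layers.

def rto_optimize(disturbance):
    """RTO layer: returns the economically optimal steady state (stub)."""
    return np.array([1.0 - 0.1 * disturbance, 0.5])

def target_calculation(rto_setpoint, x_now):
    """TC layer: translates the RTO optimum into a reachable target,
    here by limiting the step size (a crude stand-in for the
    optimization usually solved at this layer)."""
    step = np.clip(rto_setpoint - x_now, -0.05, 0.05)
    return x_now + step

def mpc_step(x_now, target):
    """MPC layer: one control move toward the target (stub dynamics)."""
    u = 0.5 * (target - x_now)      # proportional stand-in for MPC
    return x_now + u                # 'plant' response

x = np.array([0.2, 0.2])
for k in range(50):
    d = 0.3 if k > 25 else 0.0      # a step disturbance
    x_opt = rto_optimize(d)         # slow economic layer
    x_tgt = target_calculation(x_opt, x)
    x = mpc_step(x, x_tgt)
print(x)
```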

Relevance:

100.00%

Publisher:

Abstract:

The performance of parallel vector implementations of the one- and two-dimensional orthogonal transforms is evaluated. The orthogonal transforms are computed using actual or modified fast Fourier transform (FFT) kernels. The factors considered in comparing the speed-up of these vectorized digital signal processing algorithms are discussed, and it is shown that the traditional way of comparing the execution speed of digital signal processing algorithms by the ratios of the numbers of multiplications and additions is no longer effective for vector implementation; the structure of the algorithm must also be considered as a factor when comparing the execution speed of vectorized digital signal processing algorithms. Simulation results on the Cray X-MP are presented for the following orthogonal transforms: the discrete Fourier transform (DFT), discrete cosine transform (DCT), discrete sine transform (DST), discrete Hartley transform (DHT), discrete Walsh transform (DWHT), and discrete Hadamard transform (DHDT). A comparison between the DHT and the fast Hartley transform is also included.
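The point about operation counts can be illustrated outside the Cray context: the two O(N^2) DFTs below perform identical arithmetic, yet the vectorized one runs orders of magnitude faster, so multiply/add ratios alone say little about execution speed. This is a NumPy illustration of the principle, not a reproduction of the paper's measurements.

```python
import time
import numpy as np

# Identical arithmetic, very different execution speed: structure matters.
N = 1024
x = np.random.rand(N)
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # DFT matrix

t0 = time.perf_counter()
X_mat = W @ x                                   # vectorized O(N^2) DFT
t1 = time.perf_counter()
X_loop = np.array([sum(x[k] * W[m, k] for k in range(N)) for m in range(N)])
t2 = time.perf_counter()

print(f"matrix DFT: {t1 - t0:.4f}s, loop DFT: {t2 - t1:.4f}s")
assert np.allclose(X_mat, np.fft.fft(x)) and np.allclose(X_loop, X_mat)
```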

Relevance:

100.00%

Publisher:

Abstract:

Visual fixation is employed by humans and some animals to keep a specific 3D location at the center of the visual gaze. Inspired by this phenomenon in nature, this paper explores the idea of transferring this mechanism to the context of video stabilization for a handheld video camera. A novel approach is presented that stabilizes a video by fixating on automatically extracted 3D target points. This approach differs from existing automatic solutions that stabilize the video by smoothing. To determine the 3D target points, the recorded scene is analyzed with a state-of-the-art structure-from-motion algorithm, which estimates camera motion and reconstructs a 3D point cloud of the static scene objects. Special algorithms are presented that search for either virtual or real 3D target points which back-project close to the center of the image for as long a period of time as possible. The stabilization algorithm then transforms the original images of the sequence so that these 3D target points are kept exactly at the center of the image, which, in the case of real 3D target points, produces a perfectly stable result at the image center. Furthermore, different methods of additional user interaction are investigated. It is shown that the stabilization process can easily be controlled and that it can be combined with state-of-the-art tracking techniques in order to obtain a powerful image stabilization tool. The approach is evaluated on a variety of videos taken with a hand-held camera in natural scenes.
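The core fixation step can be sketched as follows: project the 3D target point with the camera pose recovered by structure-from-motion, then shift the frame so the projection lands exactly at the image center. The pinhole model, pose values, and the pure-translation warp below are illustrative simplifications of the paper's full image transformation.

```python
import numpy as np

def fixate_shift(K, R, t, target_3d, image_center):
    """Compute the 2D shift that moves the projection of a 3D target
    point to the image center (a simplified stand-in for the paper's
    full image transformation)."""
    p_cam = R @ target_3d + t             # world -> camera
    p_img = K @ p_cam
    p_img = p_img[:2] / p_img[2]          # pinhole projection
    return image_center - p_img           # translation to apply

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.02, -0.01, 0.0])   # pose from SfM (assumed)
shift = fixate_shift(K, R, t, np.array([0.0, 0.0, 5.0]),
                     np.array([320.0, 240.0]))
# The frame would then be warped by 'shift' (e.g. with cv2.warpAffine)
# so the target stays exactly at the center, as the paper describes.
print(shift)
```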

Relevance:

100.00%

Publisher:

Abstract:

A highly parallel and scalable Deblocking Filter (DF) hardware architecture for the H.264/AVC and SVC video codecs is presented in this paper. The proposed architecture mainly consists of a coarse-grained systolic array obtained by replicating a unique and homogeneous Functional Unit (FU), in which a whole Deblocking-Filter unit is implemented. The proposal is also based on a novel macroblock-level parallelization strategy for the filtering algorithm which improves the final performance by exploiting specific data dependences. In this way, communication overhead is reduced and more intensive parallelism is obtained in comparison with existing state-of-the-art solutions. Furthermore, the architecture is completely flexible, since the level of parallelism can be changed according to the application requirements. The design has been implemented in a Virtex-5 FPGA, and it allows filtering 4CIF (704 × 576 pixels @ 30 fps) video sequences in real time at frequencies lower than 10.16 MHz.
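The abstract does not detail the dependence analysis, but a common macroblock-level pattern of this kind is a wavefront, where macroblock (x, y) waits only on its left and top neighbours. The sketch below groups macroblocks into such parallel waves; this is an assumption about the general technique, not the paper's exact schedule.

```python
def wavefront_schedule(mb_cols, mb_rows):
    """Group macroblocks into anti-diagonal waves: if MB (x, y) depends
    on its left and top neighbours, all MBs in one wave can be filtered
    in parallel by the replicated functional units.

    A generic wavefront illustration, not the paper's dependence analysis.
    """
    waves = {}
    for y in range(mb_rows):
        for x in range(mb_cols):
            waves.setdefault(x + y, []).append((x, y))
    return [waves[d] for d in sorted(waves)]

# 4CIF is 44 x 36 macroblocks (704/16 x 576/16); a small example here:
for i, wave in enumerate(wavefront_schedule(4, 3)):
    print(f"wave {i}: {wave}")
```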

Relevance:

100.00%

Publisher:

Abstract:

This doctoral thesis focuses on the modeling of multimedia systems to create personalized recommendation services based on the analysis of users' audiovisual consumption. The research focuses on the characterization of both users' audiovisual consumption and content, specifically images and video. This double characterization converges into a hybrid recommendation algorithm, adapted to different application scenarios covering different specificities and constraints. Hybrid recommendation systems use both content and user information as input data, applying the knowledge from the analysis of these data as the initial step to feed the algorithms that generate personalized recommendations. Regarding user information, this doctoral thesis focuses on the analysis of audiovisual consumption to infer implicitly acquired preferences. The inference process is based on a new probabilistic model proposed in the text. This model takes into account qualitative and quantitative consumption factors on the one hand, and external factors such as the zapping factor or the company factor on the other. As for content information, this research focuses on the modeling of descriptors and aesthetic characteristics, which influence the user and are thus useful for the recommendation system. Similarly, the automatic extraction of these descriptors from the audiovisual piece without excessive computational cost has been considered a priority, in order to ensure applicability to different real scenarios. Finally, a new content-based recommendation algorithm has been created from the previously acquired information, i.e. user preferences and content descriptors. This algorithm has been hybridized with a collaborative filtering algorithm from the current state of the art, so as to compare the efficiency of the hybrid recommender with the individual recommendation techniques (different state-of-the-art hybridization techniques have been studied for suitability). The content-based recommendation focuses on the influence of aesthetic characteristics on users. The heterogeneity of the possible users of these kinds of systems calls for the use of different criteria and attributes to create effective recommendations. Therefore, the proposed algorithm adapts to different perceptions, producing a dynamic representation of preferences to obtain personalized recommendations for each user of the system. The hypotheses of this doctoral thesis have been validated by conducting a set of tests with real users, or by querying a database of user preferences that is available to the scientific community. The thesis is structured according to the different research and validation methodologies of the techniques involved. In the three central chapters the state of the art is studied and the developed algorithms and models are validated via self-designed tests. It should be noted that some of these tests are incremental and confirm the validation of previously discussed techniques.
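A weighted hybridization, one of the state-of-the-art techniques such a thesis would compare, can be sketched in a few lines. The per-item scores and the weight alpha below are hypothetical, not values from the thesis.

```python
import numpy as np

def hybrid_scores(content_scores, cf_scores, alpha=0.6):
    """Weighted hybridization of a content-based recommender and a
    collaborative-filtering recommender (one common hybridization
    technique; alpha is illustrative, not from the thesis)."""
    return alpha * content_scores + (1 - alpha) * cf_scores

# Hypothetical per-item scores for one user, in [0, 1].
content = np.array([0.9, 0.2, 0.6, 0.4])   # aesthetic-descriptor match
cf = np.array([0.5, 0.7, 0.6, 0.1])        # neighbourhood prediction
ranked = np.argsort(-hybrid_scores(content, cf))
print("recommended item order:", ranked)
```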

Relevance:

100.00%

Publisher:

Abstract:

Nowadays, technological advances in the field of intelligent transportation systems (ITS) have led to the development of several advanced driver assistance systems (ADAS), improving the experience and safety of the passengers, especially the driver. Most of these systems are designed to warn the driver about risk situations, such as involuntary lane departure or the proximity of obstacles on the road. However, other ADAS go a step further and are able to cooperate with the driver in the control of the vehicle, or even to relieve the driver of some tedious tasks. Examples of this kind of system are the electronic stability program (ESP), the anti-lock braking system (ABS), cruise control (CC), and the recently commercialised assisted parking systems. Within this line of development, the next step is the removal of the human driver, developing systems able to drive a vehicle autonomously and with better performance than a human driver. First of all, this dissertation presents a control architecture for vehicle automation. It is made up of several hardware and software components, grouped according to their main function. The design of this architecture builds on previous work by the AUTOPIA Program, although it introduces notable improvements in the efficiency, robustness and scalability of the system. Also noteworthy is the development of a localization algorithm based on the emulation of the behaviour of biological swarms, with performance similar to that of the well-known particle filters. The proposed method filters and fuses the information obtained from the different sensors on board the vehicle, including a GPS (Global Positioning System) receiver, inertial measurement units (IMU), and data taken directly from the manufacturer's embedded sensors, such as wheel speed and steering-wheel position. This filtering algorithm properly solves the localization problem, which is critical for the development of autonomous driving systems. The work also addresses the tuning of fuzzy controllers, a very time-consuming task when done manually, by studying the viability of learning and adaptation techniques for controller design. First, Q-learning, a reinforcement learning method, is applied to generate a lateral fuzzy controller from scratch, without any prior knowledge. Subsequently, an on-line tuning method is presented for adapting the longitudinal control to unpredictable environment disturbances, such as changes in the road slope, wheel friction, or the occupants' weight. Finally, the results of an autonomous driving experiment on real roads are presented. The experiment was carried out in June 2012, driving from San Lorenzo de El Escorial to the facilities of the Center for Automation and Robotics (CAR) in Arganda del Rey. The main goal of this demonstration was to validate the performance, robustness and capability of the proposed architecture to tackle the problem of autonomous driving under far more realistic conditions than those achievable on closed test tracks.
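The localization idea can be illustrated with a generic particle filter fusing an odometry-style motion increment with GPS fixes. The thesis's swarm-based variant differs in its update rules, which the abstract does not specify, so the sketch below is a stand-in with illustrative noise levels.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, u, gps, gps_sigma=2.0):
    """One predict/update cycle of a particle-style localizer fusing an
    odometry motion increment (wheel speed / IMU) with a GPS fix. A
    generic particle filter standing in for the thesis's swarm-based
    variant, whose update rules the abstract does not specify."""
    particles = particles + u + rng.normal(0, 0.5, particles.shape)  # predict
    d2 = np.sum((particles - gps) ** 2, axis=1)
    w = np.exp(-d2 / (2 * gps_sigma ** 2))                           # weight
    w /= w.sum()
    idx = rng.choice(len(particles), len(particles), p=w)            # resample
    return particles[idx], particles[idx].mean(axis=0)

particles = rng.normal(0, 5, (500, 2))        # initial 2D position belief
true_pos, u = np.zeros(2), np.array([1.0, 0.0])
for _ in range(20):
    true_pos = true_pos + u                   # vehicle moves east
    gps = true_pos + rng.normal(0, 2.0, 2)    # noisy GPS fix
    particles, estimate = pf_step(particles, u, gps)
print("estimate:", estimate, "truth:", true_pos)
```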

Relevance:

100.00%

Publisher:

Abstract:

The importance of recommender systems has grown exponentially with the advent of social networks. In this PhD thesis I will provide a broad overview of the state of the art of recommender systems. Initially, these systems were based on demographic, content-based or collaborative filtering. Currently, they incorporate some social information into the recommendation process. In the future, they will use implicit, local and personal information from the Internet of Things. Recommender systems based on collaborative filtering can be modified to make recommendations to groups of users. Previous works have introduced this modification at different stages of the collaborative filtering algorithm: establishing the neighborhood, predicting the ratings, and selecting the recommended items. In this PhD thesis I will provide a new method that carries out the unification process (from many users to one group) in the first stage of the collaborative filtering algorithm: the similarity metric computation. I will provide a full formalization of the proposed method, explain how to obtain the k nearest neighbors of the group of users, and show how to obtain recommendations using those neighbors. I will also include a running example on a recommender system with 8 users and 10 items, detailing every step of the proposed method. The main highlights of the proposed method are: (a) it is faster (more efficient) than the alternatives provided by other authors, and (b) it is at least as precise and accurate as the other studied solutions. To test this hypothesis I will conduct several experiments measuring the accuracy, precision and performance of the method, and compare the results with those of other group recommendation alternatives. The experiments will be carried out on the MovieLens and Netflix datasets.
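The unification idea, moving from many users to one group at the similarity-metric stage, can be sketched as follows. The specific metric below (negative mean squared difference over items co-rated by the candidate and every group member) is illustrative, not necessarily the one formalized in the thesis.

```python
import numpy as np

def group_similarity(ratings, group, candidate):
    """Similarity between a candidate user and a whole group, computed
    directly at the metric stage (the unification idea described above).
    NaN marks an unrated item; the metric itself is an assumption."""
    co = ~np.isnan(ratings[candidate])
    for g in group:
        co &= ~np.isnan(ratings[g])
    if not co.any():
        return -np.inf
    diffs = [(ratings[candidate, co] - ratings[g, co]) ** 2 for g in group]
    return -np.mean(diffs)  # higher is more similar

# Toy matrix: 8 users x 10 items, matching the running example's size.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, (8, 10)).astype(float)
ratings[rng.random((8, 10)) < 0.3] = np.nan     # ~30% unrated
group = [0, 1]
sims = {u: group_similarity(ratings, group, u)
        for u in range(8) if u not in group}
k_neighbors = sorted(sims, key=sims.get, reverse=True)[:3]
print("k nearest neighbours of the group:", k_neighbors)
```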

Relevance:

100.00%

Publisher:

Abstract:

This paper addresses the problem of obtaining detailed 3D reconstructions of human faces in real time and with inexpensive hardware. We present an algorithm based on a monocular multi-spectral photometric-stereo setup. Such a setup is known to capture highly detailed deforming 3D surfaces at high frame rates without requiring expensive hardware or a synchronized light stage. However, its main challenge is the calibration stage, which depends on the light setup and on how the lights interact with the specific material being captured, in this case human faces. For this purpose we develop a self-calibration technique in which the person being captured is asked to perform a rigid motion in front of the camera while maintaining a neutral expression. Rigidity constraints are then used to compute the head's motion with a structure-from-motion algorithm. Once the motion is obtained, a multi-view stereo algorithm reconstructs a coarse 3D model of the face. This coarse model is then used to estimate the lighting parameters with a stratified approach: in the first step, a RANSAC search identifies purely diffuse points on the face and simultaneously estimates the diffuse reflectance model; in the second step, non-linear optimization fits a non-Lambertian reflectance model to the outliers of the previous step. The calibration procedure is validated with synthetic and real data.
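The first calibration step can be sketched as a RANSAC search for purely diffuse points under a Lambertian model I = albedo * max(n . l, 0). The single-channel setup, shared albedo, and tolerances below are simplifications of the multi-spectral case; the outliers found here would feed the second, non-Lambertian fitting step.

```python
import numpy as np

rng = np.random.default_rng(0)

def ransac_lambertian(normals, intensities, light, iters=200, tol=0.05):
    """RANSAC identification of purely diffuse (Lambertian) points,
    I = albedo * max(n . l, 0), with a shared albedo estimate. A
    simplified single-channel stand-in for the paper's multi-spectral
    setup; outliers go on to the non-Lambertian fit."""
    shading = np.clip(normals @ light, 0, None)
    best_albedo, best_inliers = None, np.zeros(len(normals), bool)
    for _ in range(iters):
        i = rng.integers(len(normals))
        if shading[i] < 1e-6:
            continue
        albedo = intensities[i] / shading[i]           # 1-point hypothesis
        inliers = np.abs(intensities - albedo * shading) < tol
        if inliers.sum() > best_inliers.sum():
            best_albedo, best_inliers = albedo, inliers
    return best_albedo, best_inliers

# Synthetic test: diffuse points with albedo 0.7 plus specular outliers.
n = rng.normal(size=(300, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
light = np.array([0.0, 0.0, 1.0])
I = 0.7 * np.clip(n @ light, 0, None)
I[:60] += rng.uniform(0.2, 0.8, 60)                    # specular outliers
albedo, inliers = ransac_lambertian(n, I, light)
print(f"estimated albedo: {albedo:.3f}, inliers: {inliers.sum()}")
```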