17 results for "Slot-based task-splitting algorithms"
in Archivo Digital para la Docencia y la Investigación - Institutional Repository of the Universidad del País Vasco
Abstract:
The study of emotions in human-computer interaction is a growing research area. This paper presents an attempt to select the most significant features for emotion recognition in spoken Basque and Spanish, using different feature selection methods. The RekEmozio database was used as the experimental data set, and several Machine Learning paradigms were applied to the emotion classification task. Experiments were executed in three phases, with a different set of features serving as classification variables in each phase, and feature subset selection was applied at each phase to search for the most relevant feature subset. The three-phase approach was chosen to check the validity of the proposed method. The results show that an instance-based learning algorithm combined with feature subset selection based on evolutionary algorithms is the best Machine Learning paradigm for automatic emotion recognition across all feature sets, obtaining a mean emotion recognition rate of 80.05% in Basque and 74.82% in Spanish. To assess the quality of the proposed process, a greedy search approach (FSS-Forward) was also applied and a comparison between the two is provided. Based on these results, a set of the most relevant speaker-independent features is proposed for both languages and new perspectives are suggested.
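The wrapper scheme described above can be summarized in a few lines. Below is a minimal sketch of evolutionary feature subset selection around an instance-based learner (a 3-NN classifier standing in for the paper's paradigm); the GA settings and the `evolve_subset` helper are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    # wrapper evaluation: cross-validated accuracy of the instance-based
    # learner on the selected feature subset
    if not mask.any():
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

def evolve_subset(X, y, pop_size=20, generations=30, p_mut=0.05):
    n = X.shape[1]
    pop = rng.random((pop_size, n)) < 0.5            # random binary masks
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        order = np.argsort(scores)[::-1]
        elite = pop[order[: pop_size // 2]]           # truncation selection
        children = elite[rng.integers(0, len(elite), pop_size - len(elite))].copy()
        children ^= rng.random(children.shape) < p_mut  # bit-flip mutation
        pop = np.vstack([elite, children])
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()]                       # best feature mask found
```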
Abstract:
373 p.: ill., graphs, photographs, tables
Abstract:
Recently, probability models on rankings have been proposed in the field of estimation of distribution algorithms (EDAs) to solve permutation-based combinatorial optimisation problems. In particular, distance-based ranking models, such as the Mallows and Generalized Mallows models under the Kendall's-τ distance, have demonstrated their validity for this type of problem. Nevertheless, many directions still deserve further study. In this paper, we extend the use of distance-based ranking models in the EDA framework by introducing new distance metrics, namely Cayley and Ulam. To analyse the performance of the Mallows and Generalized Mallows EDAs under the Kendall, Cayley and Ulam distances, we run them on a benchmark of 120 instances from four well-known permutation problems. The experiments showed that no single metric performs best across all the problems; however, the statistical tests pointed out that the Mallows-Ulam EDA is the most stable algorithm among the studied proposals.
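For reference, the three metrics have simple computational definitions: Kendall's-τ counts discordant pairs (minimum adjacent transpositions), Cayley counts minimum arbitrary transpositions (n minus the number of cycles), and Ulam is n minus the length of the longest increasing subsequence. The sketch below computes each as the distance of a permutation from the identity; distances between two permutations follow by composing one with the other's inverse. These are standard textbook implementations, not code from the paper.

```python
from bisect import bisect_left

def kendall(sigma):
    # number of discordant pairs = minimum adjacent transpositions
    return sum(sigma[i] > sigma[j]
               for i in range(len(sigma)) for j in range(i + 1, len(sigma)))

def cayley(sigma):
    # n minus number of cycles = minimum (arbitrary) transpositions
    n, seen, cycles = len(sigma), [False] * len(sigma), 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = sigma[j]
    return n - cycles

def ulam(sigma):
    # n minus length of the longest increasing subsequence (patience sorting)
    tails = []
    for x in sigma:
        k = bisect_left(tails, x)
        if k == len(tails):
            tails.append(x)
        else:
            tails[k] = x
    return len(sigma) - len(tails)
```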
Abstract:
179 p.
Abstract:
This work aims to optimize the wind turbine rotor speed setpoint algorithm. Several intelligent adjustment strategies were investigated to improve a reward function that takes into account the power captured from the wind and the turbine speed error. After trying different approaches, including Reinforcement Learning, the best results were obtained with a Particle Swarm Optimization (PSO)-based wind turbine speed setpoint algorithm. A reward improvement of up to 10.67% was achieved using PSO compared to a constant approach, and of 0.48% compared to a conventional approach. We conclude that the pitch angle is the most adequate input variable for the turbine speed setpoint algorithm, compared to alternatives such as rotor speed or rotor angular acceleration.
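For readers unfamiliar with PSO, a minimal sketch of the optimization loop follows; the toy reward below merely stands in for the paper's reward (captured power versus speed error), and all hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def reward(theta):
    # placeholder objective; the paper's reward is not reproduced here
    return -np.sum((theta - 0.5) ** 2)

def pso(dim=3, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.random((n_particles, dim))        # particle positions
    v = np.zeros_like(x)                      # particle velocities
    pbest, pbest_val = x.copy(), np.array([reward(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()  # best position found so far
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([reward(p) for p in x])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest
```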
Abstract:
131 p.: graphs
Abstract:
Ventricular fibrillation (VF) is the first recorded rhythm in 40% of sudden deaths due to out-of-hospital cardiorespiratory arrest (OHCA). The only effective treatment for VF is defibrillation by electric shock. Outside the hospital, the shock is delivered by an automated external defibrillator (AED), which first analyzes the patient's electrocardiogram (ECG) and checks whether it shows a shockable rhythm. Survival in an OHCA case depends fundamentally on two factors: early defibrillation and early cardiopulmonary resuscitation (CPR), which prolongs VF and thus the window for defibrillation. Correct rhythm analysis requires interrupting CPR, because the chest compressions introduce artifacts into the ECG. Unfortunately, interrupting CPR adversely affects defibrillation success. In 2003, AED use was approved for patients between 1 and 8 years of age. AEDs, originally designed for adult patients, must discriminate pediatric arrhythmias accurately for their use in children to be safe. Several AEDs have been adapted for pediatric use, either by demonstrating the accuracy of the adult algorithms on pediatric arrhythmias or through algorithms specific to pediatric arrhythmias. This thesis presents a new AED algorithm designed jointly for adult and pediatric patients. The algorithm has been tested exhaustively on databases meeting the requirements of the American Heart Association (AHA), and on resuscitation recordings with and without CPR artifacts. The work began with a long experimental phase in which a total of 1090 pediatric rhythms were retrospectively collected and classified. In addition, an adult arrhythmia database was reviewed and 928 new adult rhythms were added. The final database contains 2782 records; 1270 were used to design the algorithm and 1512 to validate it. Next, a new AED algorithm composed of four sub-algorithms was designed. These sub-algorithms are based on a set of new arrhythmia-detection parameters computed in several signal domains, such as time, frequency, slope, and the autocorrelation function. The algorithm meets the AHA requirements for the detection of shockable and non-shockable rhythms in both adult and pediatric patients. The work concluded with an analysis of the algorithm's behavior on real resuscitation episodes. For rhythms without CPR artifacts, the AHA requirements were met. The algorithm's accuracy during chest compressions was then studied, before and after filtering the CPR artifact, using a new suppression method developed during the thesis. Shockable rhythms were detected accurately after filtering; non-shockable rhythms, however, were not.
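As a rough illustration of the kind of shock/no-shock parameters described above (computed in the time, frequency, slope and autocorrelation domains), here is a minimal Python sketch. The feature set, the sampling rate and the decision thresholds are hypothetical placeholders, not the thesis' validated algorithm.

```python
import numpy as np

def features(ecg, fs):
    # slope-domain: mean absolute derivative of the signal
    slope = np.diff(ecg) * fs
    # frequency-domain: dominant frequency of the power spectrum (skip DC)
    spec = np.abs(np.fft.rfft(ecg)) ** 2
    freqs = np.fft.rfftfreq(len(ecg), 1 / fs)
    dominant = freqs[spec[1:].argmax() + 1]
    return {
        "mean_abs_slope": np.mean(np.abs(slope)),
        "dominant_freq_hz": dominant,
        # autocorrelation-domain: lag-1 correlation as a regularity proxy
        "autocorr_lag1": np.corrcoef(ecg[:-1], ecg[1:])[0, 1],
    }

def is_shockable(ecg, fs=250):
    f = features(ecg, fs)
    # hypothetical rule: fast dominant rate with disorganized morphology
    return f["dominant_freq_hz"] > 3.0 and f["autocorr_lag1"] < 0.9
```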
Abstract:
Methods for generating a new population are a fundamental component of estimation of distribution algorithms (EDAs): they transfer the information contained in the probabilistic model to the newly generated population. In EDAs based on Markov networks, population-generation methods usually discard information contained in the model to gain efficiency, while methods like Gibbs sampling use information about all interactions in the model but are computationally very costly. In this paper we propose new methods for generating solutions in EDAs based on Markov networks. We introduce approaches based on inference methods for computing the most probable configurations and on model-based template recombination. We show that applying different variants of these inference methods can increase the EDAs' convergence rate and reduce the number of function evaluations needed to find the optimum of binary and non-binary discrete functions.
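To make the computational trade-off concrete, the sketch below shows plain Gibbs sampling on a pairwise binary Markov network — the costly baseline that the inference-based generation methods proposed above aim to improve upon. The logistic parameterization and sweep count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def gibbs_sample(W, b, sweeps=50):
    """W: symmetric pairwise weights, b: unary biases; returns x in {0,1}^n."""
    n = len(b)
    x = rng.integers(0, 2, n)
    for _ in range(sweeps):
        for i in range(n):
            # conditional log-odds of x_i = 1 given all other variables
            logit = b[i] + W[i] @ x - W[i, i] * x[i]
            x[i] = rng.random() < 1 / (1 + np.exp(-logit))
    return x
```

Every variable update consults all of its interactions in the model, which is exactly why full Gibbs sampling is informative but expensive compared with methods that discard part of the model structure.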
Abstract:
The work presented here is part of a larger study to identify novel technologies and biomarkers for early Alzheimer disease (AD) detection, and it focuses on evaluating the suitability of a new, non-invasive approach for early AD diagnosis. The purpose is to examine, in a pilot study, the potential of applying intelligent algorithms to speech features obtained from suspected patients, in order to contribute to improving the diagnosis of AD and of its degree of severity. To this end, Artificial Neural Networks (ANN) were used for the automatic classification of the two classes (AD and control subjects). Two human dimensions were analyzed for feature selection: Spontaneous Speech and Emotional Response. Not only linear features but also non-linear ones, such as Fractal Dimension, were explored. The approach is non-invasive, low cost and free of side effects. The experimental results were very satisfactory and promising for the early diagnosis and classification of AD patients.
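As an example of the non-linear features mentioned above, here is a minimal sketch of a fractal dimension estimate for a signal. Higuchi's method is one common choice, assumed here since the abstract does not specify the estimator used.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi fractal dimension of a 1-D signal (e.g. a speech frame)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        Lk = []
        for m in range(k):
            idx = np.arange(m, N, k)
            if len(idx) < 2:
                continue
            # normalized curve length for stride k, offset m
            Lmk = np.abs(np.diff(x[idx])).sum() * (N - 1) / ((len(idx) - 1) * k * k)
            Lk.append(Lmk)
        lengths.append(np.mean(Lk))
    # slope of log L(k) vs. log(1/k) estimates the fractal dimension
    k = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k), np.log(lengths), 1)
    return slope
```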
Abstract:
Computer vision algorithms that use color information require color-constant images to operate correctly. Color constancy is usually achieved in two steps: first the illuminant is detected, and then the image is transformed with a chromatic adaptation transform (CAT). Existing CAT methods use a single transformation matrix for all the colors of the input image. The method proposed in this paper requires multiple corresponding color pairs between the source and target illuminants, given by the patches of the Macbeth color checker. It uses Delaunay triangulation to divide the color gamut of the input image into small triangles; each color of the input image is associated with the triangle containing its color point and transformed with a full linear model associated with that triangle. A full linear model is used because diagonal models are known to be inaccurate when the channel color matching functions do not have narrow peaks. Objective evaluation showed that the proposed method outperforms existing CAT methods by more than 21%, performing statistically significantly better than the other existing methods.
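A compact way to realize the scheme described above is to triangulate the source patch chromaticities and fit one exact 3×3 linear model per triangle from its three corresponding color pairs. The sketch below follows that idea; the use of (r, g) chromaticities and the helper names are our assumptions, not code from the paper.

```python
import numpy as np
from scipy.spatial import Delaunay

def chromaticity(rgb):
    # project RGB onto the (r, g) chromaticity plane
    s = rgb.sum(axis=-1, keepdims=True)
    return (rgb / np.where(s == 0, 1, s))[..., :2]

def build_cat(src_patches, dst_patches):
    # triangulate the source gamut given by the checker patches
    tri = Delaunay(chromaticity(src_patches))
    models = []
    for simplex in tri.simplices:
        S = src_patches[simplex].T   # 3x3, columns = source patch RGBs
        D = dst_patches[simplex].T   # 3x3, columns = target patch RGBs
        models.append(D @ np.linalg.inv(S))  # exact full linear model
    return tri, models

def apply_cat(pixels, tri, models):
    out = pixels.copy()
    idx = tri.find_simplex(chromaticity(pixels))
    # pixels outside the triangulated gamut (idx == -1) are left unchanged
    for t, M in enumerate(models):
        sel = idx == t
        out[sel] = pixels[sel] @ M.T
    return out
```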
Abstract:
This paper describes Mateda-2.0, a MATLAB package for estimation of distribution algorithms (EDAs). The package can be used to solve single- and multi-objective discrete and continuous optimization problems using EDAs based on undirected and directed probabilistic graphical models. The implementation contains several methods commonly employed by EDAs and is conceived as an open package that allows users to incorporate different combinations of selection, learning, sampling, and local search procedures. Additionally, it includes methods to extract, process and visualize the structures learned by the probabilistic models; in this way, it can reveal previously unknown information about the optimization problem domain. Mateda-2.0 also incorporates a module for creating and validating function models based on the probabilistic models learned by EDAs.
Abstract:
Attempts to model any present or future power grid face a huge challenge, because a power grid is a complex system with feedback and multi-agent behaviors, integrating generation, distribution, storage and consumption systems, and using various control and automation computing systems to manage electricity flows. Our modeling approach builds upon an established, tested and proven model of the low-voltage electricity network, extending it into a generalized energy model. To address the crucial issue of energy efficiency, however, additional processes such as energy conversion and storage, and further energy carriers, such as gas and heat, besides the traditional electrical one, must be considered. A more powerful model is therefore required, provided with enhanced nodes, or conversion points, able to deal with multidimensional flows. This article addresses the modeling of a local multi-carrier energy network, which can be considered an extension of modeling a low-voltage distribution network located in some urban or rural geographic area. Instead of using an external power-flow analysis package for the power-flow calculations, as is customary for electric networks, in this work we integrate a multi-agent algorithm that performs the task concurrently with the other simulation tasks, and not only for electricity but also for a number of additional energy carriers. As the model focuses mainly on system operation, generation and load models are not developed.
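As a rough illustration of what such a conversion point must compute, here is a minimal energy-hub-style sketch. The carriers, conversion efficiencies and the `node_outputs` helper are illustrative assumptions, not the article's model.

```python
import numpy as np

# illustrative conversion efficiencies at one node
eta_chp_e, eta_chp_h = 0.35, 0.45   # gas -> electricity / heat (CHP unit)
eta_boiler = 0.90                    # gas -> heat (boiler)

def node_outputs(p_in, alpha):
    """p_in = [elec_in, gas_in]; alpha = share of gas routed to the CHP.

    Returns the carrier outputs [electricity, heat] of the conversion node,
    the kind of multidimensional flow an enhanced node must balance.
    """
    elec_in, gas_in = p_in
    elec_out = elec_in + eta_chp_e * alpha * gas_in
    heat_out = eta_chp_h * alpha * gas_in + eta_boiler * (1 - alpha) * gas_in
    return np.array([elec_out, heat_out])
```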
Abstract:
211 p.: ill.
Abstract:
4 p.
Abstract:
221 p.