865 results for Ant-based algorithm


Relevance:

40.00%

Publisher:

Abstract:

The number and grade of injured neuroanatomical structures and the type of injury determine the degree of impairment after a brain injury event and the recovery options of the patient. However, the body of knowledge and the clinical intervention guides focus mainly on functional disorders and still do not take into account the location of the injuries; nor is the prognostic value of location information known in detail. This paper proposes a feature-based detection algorithm, named Neuroanatomic-Based Detection Algorithm (NBDA), based on SURF (Speeded Up Robust Features), to label anatomical brain structures in cortical and sub-cortical areas. The main goal is to register injured neuroanatomical structures in order to generate a database containing the patient's structural impairment profile. This kind of information makes it possible to relate structural damage to functional disorders and to the prognostic evolution during neurorehabilitation procedures.
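
Below is a minimal sketch of the kind of feature-based labelling the abstract describes: keypoints are detected in a patient slice and matched against labelled atlas templates, and a structure is reported when enough matches survive a descriptor-distance cut-off. It uses ORB (available in stock OpenCV) as a stand-in for SURF, and the template dictionary, file paths and thresholds are illustrative assumptions rather than details from the paper.

```python
# Hedged sketch of feature-based labelling of brain structures. ORB is used
# here because SURF requires the opencv-contrib build; all names, paths and
# thresholds are placeholders, not values from the paper.
import cv2

def label_structures(patient_slice_path, atlas_templates, min_matches=12):
    """atlas_templates: dict mapping structure name -> template image path."""
    patient = cv2.imread(patient_slice_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    kp_p, des_p = orb.detectAndCompute(patient, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    detected = {}
    for name, tpl_path in atlas_templates.items():
        tpl = cv2.imread(tpl_path, cv2.IMREAD_GRAYSCALE)
        kp_t, des_t = orb.detectAndCompute(tpl, None)
        if des_t is None or des_p is None:
            continue
        matches = matcher.match(des_t, des_p)
        good = [m for m in matches if m.distance < 40]   # descriptor-distance cut-off
        if len(good) >= min_matches:
            detected[name] = len(good)                   # structure found in the slice
    return detected

# usage (paths are placeholders):
# print(label_structures("slice_042.png", {"hippocampus": "tpl_hippocampus.png"}))
```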

Relevance:

40.00%

Publisher:

Abstract:

In this study, we present a framework based on ant colony optimization (ACO) for tackling combinatorial problems. ACO algorithms have been applied to many different problems, with research focusing on algorithmic variants that obtain high-quality solutions. Usually, the implementations are redone for each new problem even when the details of the ACO algorithm stay the same. Our goal, in contrast, is to build a sustainable framework for applications to permutation problems. We concentrate on understanding the behaviour of pheromone trails and the specific methods that can be combined with them. Finally, we will propose an automatic offline configuration tool to build an effective algorithm.
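
A minimal sketch of the core ACO loop for a permutation problem (a tour over n items given a cost matrix), showing the pheromone trails the framework is concerned with; the parameter values and the best-so-far deposit rule are illustrative choices, not the framework's defaults.

```python
# Minimal ACO sketch for a permutation problem given a cost matrix.
# Parameter values (alpha, beta, rho, ant count) are illustrative only.
import random

def aco_permutation(cost, n_ants=20, n_iter=100, alpha=1.0, beta=2.0, rho=0.1):
    n = len(cost)
    tau = [[1.0] * n for _ in range(n)]              # pheromone trails
    best_perm, best_cost = None, float("inf")

    def tour_cost(perm):
        return sum(cost[perm[i]][perm[(i + 1) % n]] for i in range(n))

    for _ in range(n_iter):
        for _ in range(n_ants):
            perm = [random.randrange(n)]             # each ant builds one permutation
            while len(perm) < n:
                i = perm[-1]
                cand = [j for j in range(n) if j not in perm]
                w = [tau[i][j] ** alpha * (1.0 / (cost[i][j] + 1e-9)) ** beta for j in cand]
                perm.append(random.choices(cand, weights=w)[0])
            c = tour_cost(perm)
            if c < best_cost:
                best_perm, best_cost = perm, c
        # evaporation, then deposit along the best-so-far solution
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for i in range(n):
            a, b = best_perm[i], best_perm[(i + 1) % n]
            tau[a][b] += 1.0 / best_cost
    return best_perm, best_cost
```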

Relevance:

40.00%

Publisher:

Abstract:

This paper presents an ant colony optimization algorithm for sequencing mixed assembly lines while considering the inventory and the replenishment of components. This is an NP-hard problem that cannot be solved to optimality by exact methods when the size of the problem grows. Groups of specialized ants are implemented to solve the different parts of the problem, so that each part can be treated separately. Different types of pheromone structures are created to identify good car sequences and good routes for the component replenishment vehicle. The contribution of this paper is the collaborative ACO approach to the mixed assembly line and the replenishment of components, and the joint solution of the two problems.
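
The collaborative idea can be sketched as two specialized ant groups that maintain separate pheromone structures, one for car sequences and one for replenishment routes, but reinforce both with the same jointly evaluated solution. The class below only illustrates that bookkeeping; the construction and evaluation steps of the paper are omitted and its field names are assumptions.

```python
# Sketch of the collaborative pheromone handling: two pheromone structures,
# both updated from one jointly evaluated (sequence, route) solution.
class TwoPheromoneACO:
    def __init__(self, n_cars, n_stations, rho=0.1):
        self.tau_seq = [[1.0] * n_cars for _ in range(n_cars)]            # car-sequence trails
        self.tau_route = [[1.0] * n_stations for _ in range(n_stations)]  # replenishment-route trails
        self.rho = rho

    def evaporate(self):
        for tau in (self.tau_seq, self.tau_route):
            for row in tau:
                for j in range(len(row)):
                    row[j] *= (1.0 - self.rho)

    def deposit(self, sequence, route, joint_cost):
        # both structures are reinforced with the same jointly evaluated solution
        for i in range(len(sequence) - 1):
            self.tau_seq[sequence[i]][sequence[i + 1]] += 1.0 / joint_cost
        for i in range(len(route) - 1):
            self.tau_route[route[i]][route[i + 1]] += 1.0 / joint_cost
```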

Relevance:

40.00%

Publisher:

Abstract:

Modern mobile devices have become increasingly powerful in functionality and entertainment as next-generation mobile computing and communication technologies have rapidly advanced. However, battery capacity has not experienced an equivalent increase. The user experience of modern mobile systems is therefore greatly affected by the battery lifetime, which is an unstable factor that is hard to control. To address this problem, previous works proposed energy-centric power management (PM) schemes that provide a strong guarantee on the battery lifetime by globally managing energy as a first-class resource in the system. As the processor scheduler plays a pivotal role in power management and in guaranteeing application performance, this thesis explores the user experience optimization of energy-limited mobile systems from the perspective of energy-centric processor scheduling.

This thesis first analyzes the general factors that contribute to the user experience of a mobile system. It then determines the essential requirements that energy-centric processor scheduling must meet for user experience optimization: proportional power sharing, time-constraint compliance and, when necessary, a trade-off between the power share and the time-constraint compliance. To meet these requirements, the classical fair queueing algorithm and its reference model are extended from the network and CPU bandwidth sharing domains to the energy sharing domain, and on that basis the energy-based fair queueing (EFQ) algorithm is proposed for performing energy-centric processor scheduling. The EFQ algorithm is designed to provide proportional power shares to tasks by scheduling them according to their energy consumption and their weights. The power share of each time-sensitive task is protected against changes in the scheduling environment to guarantee stable performance, and any instantaneous power share that is over-allocated to one time-sensitive task can be fairly re-allocated to the other tasks. In addition, to better support real-time and multimedia scheduling, a real-time-friendly mechanism is combined with the EFQ algorithm to give time-limited scheduling preference to the most urgent time-sensitive tasks.

The properties of the EFQ algorithm are evaluated through high-level modelling and simulation. The simulation results indicate that the essential requirements of energy-centric processor scheduling can be achieved. The EFQ algorithm is then implemented in the Linux kernel. To assess the properties of the Linux-based EFQ scheduler, an experimental test-bench is developed, based on an embedded platform, a multithreaded test-bench program and an open-source benchmark suite. Through specifically designed experiments, this thesis first verifies the properties of EFQ in power-share management and real-time scheduling, and then explores the potential benefits of employing EFQ scheduling in the user experience optimization of energy-limited mobile systems. Experimental results on power-share management show that EFQ is more effective than the Linux CFS scheduler in managing power shares and that it achieves proportional sharing of the system power regardless of the device on which the energy is spent. Experimental results on real-time scheduling demonstrate that EFQ can achieve effective, flexible and robust time-constraint compliance even as the energy estimation error or the number of tasks increases. Finally, a comparative analysis of the experimental results on user experience optimization demonstrates that EFQ is more effective and flexible than traditional processor scheduling algorithms, such as the default Linux scheduler, and that it provides the possibility of optimizing and preserving the user experience of energy-limited mobile systems.
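
A minimal sketch of the energy-based fair queueing idea as described above: each task accumulates an energy-based virtual time (energy charged divided by its weight) and the scheduler always runs the runnable task with the smallest virtual time. The Task fields and the energy-accounting interface are assumptions for illustration, not the thesis implementation.

```python
# Hedged sketch of energy-based fair queueing (EFQ): tasks with equal weights
# end up receiving equal shares of the system energy over time.
import heapq

class Task:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight          # reserved power share (relative)
        self.vtime = 0.0              # energy-based virtual time

class EFQScheduler:
    def __init__(self, tasks):
        self.heap = [(t.vtime, i, t) for i, t in enumerate(tasks)]
        heapq.heapify(self.heap)
        self._tick = len(tasks)       # tie-breaker for equal virtual times

    def pick_next(self):
        """Return the task with the smallest energy-based virtual time."""
        return self.heap[0][2]

    def charge(self, task, energy_joules):
        """Charge the energy consumed during the last quantum to the picked task."""
        assert self.heap[0][2] is task, "charge the task returned by pick_next"
        heapq.heappop(self.heap)
        task.vtime += energy_joules / task.weight
        heapq.heappush(self.heap, (task.vtime, self._tick, task))
        self._tick += 1

# usage sketch:
# sched = EFQScheduler([Task("video", 2.0), Task("sync", 1.0)])
# t = sched.pick_next(); ...run t for one quantum...; sched.charge(t, measured_energy)
```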

Relevance:

40.00%

Publisher:

Abstract:

Type 1 diabetes mellitus implies a life-threatening absolute insulin deficiency. The artificial pancreas (a CGM sensor, an insulin pump and a control algorithm) promises to outperform current open-loop therapies.

Relevance:

40.00%

Publisher:

Abstract:

In contrast to traditional push-based protocols, adaptive streaming techniques such as Dynamic Adaptive Streaming over HTTP (DASH) shift the focus to the client, which dynamically requests portions of the content at different qualities to cope with a limited and variable bandwidth while aiming to maximize the quality perceived by the user. Since the DASH adaptation logic at the client is not covered by the standard, we propose a solution based on Stochastic Dynamic Programming (SDP) techniques to find the optimal request policies that guarantee the users' Quality of Experience (QoE). Our algorithm is evaluated in a simulated streaming session and compared with other adaptation approaches. The results show that our proposal outperforms them in terms of QoE, requesting higher qualities on average.
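
A minimal sketch of how an optimal request policy can be computed offline by value iteration over a discretized (buffer, bandwidth) state space; the reward shape (quality minus a stall penalty), the segment/quality figures and the bandwidth Markov chain are simplified assumptions, not the model used in the paper.

```python
# Hedged sketch: quality-request policy via value iteration on a toy model.
QUALITIES = [1.0, 2.5, 5.0]           # Mbps of each representation
SEGMENT_S = 2.0                       # segment duration in seconds
MAX_BUF = 10                          # buffer capacity in segments
BANDWIDTHS = [1.5, 3.0, 6.0]          # Mbps states of a simple Markov channel
P_BW = [[0.7, 0.2, 0.1],              # bandwidth-state transition probabilities
        [0.2, 0.6, 0.2],
        [0.1, 0.2, 0.7]]
GAMMA = 0.95                          # discount factor

def step(buf, bw, q):
    """Deterministic buffer update and reward for requesting quality q."""
    download_s = QUALITIES[q] * SEGMENT_S / bw
    new_buf = max(0, min(MAX_BUF, buf - int(round(download_s / SEGMENT_S)) + 1))
    stalled = download_s > buf * SEGMENT_S
    return new_buf, QUALITIES[q] - (10.0 if stalled else 0.0)

V = {(b, w): 0.0 for b in range(MAX_BUF + 1) for w in range(len(BANDWIDTHS))}
policy = {}
for _ in range(200):                  # value-iteration sweeps
    for (b, w) in V:
        best_val = None
        for q in range(len(QUALITIES)):
            nb, r = step(b, BANDWIDTHS[w], q)
            val = r + GAMMA * sum(P_BW[w][w2] * V[(nb, w2)] for w2 in range(len(BANDWIDTHS)))
            if best_val is None or val > best_val:
                best_val, policy[(b, w)] = val, q
        V[(b, w)] = best_val
# policy[(buffer_level, bandwidth_state)] now gives the quality index to request
```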

Relevance:

40.00%

Publisher:

Abstract:

One of the main problems relief teams face after a natural or man-made disaster is how to plan rural road repair work so as to take maximum advantage of the limited available financial and human resources. Previous research focused on speeding up repair work or on selecting the location of health centers to minimize transport times for injured citizens. In spite of the good results, this research does not take into account another key factor: survivor accessibility to resources. In this paper we account for the accessibility issue; that is, we maximize the number of survivors that reach the nearest regional center (the cities where economic and social activity is concentrated) in a minimum time by planning which rural roads should be repaired given the available financial and human resources. This is a combinatorial problem, since the number of connections between cities and regional centers grows exponentially with the problem size and exact methods cannot reach an optimal solution. To solve the problem we propose an Ant Colony System adaptation, which is based on the foraging behavior of ants. Ants stochastically build minimal paths to regional centers and decide whether damaged roads are repaired on the basis of pheromone levels, accessibility heuristic information and the available budget. The proposed algorithm is illustrated by means of an example regarding the 2010 Haiti earthquake, and its performance is compared with another metaheuristic, GRASP.
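
The per-edge decision an ant makes while extending a path towards a regional center can be sketched as follows: damaged roads are candidates only while budget remains, and the choice mixes pheromone with an accessibility heuristic. The graph fields (travel cost, damage flag, repair cost, population served) are assumptions for illustration, not the paper's exact model.

```python
# Hedged sketch of one ant's budget-constrained edge choice.
import random

def choose_next(node, graph, tau, budget, alpha=1.0, beta=2.0):
    """graph[node] -> list of (neighbor, travel_cost, damaged, repair_cost, pop_served)."""
    candidates, weights = [], []
    for nbr, travel, damaged, repair, pop in graph[node]:
        if damaged and repair > budget:
            continue                          # cannot afford to repair this road
        heuristic = (1.0 + pop) / (travel + (repair if damaged else 0.0))
        candidates.append((nbr, repair if damaged else 0.0))
        weights.append(tau[(node, nbr)] ** alpha * heuristic ** beta)
    if not candidates:
        return None, budget                   # dead end: backtrack or restart the ant
    nbr, spent = random.choices(candidates, weights=weights)[0]
    return nbr, budget - spent
```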

Relevance:

40.00%

Publisher:

Abstract:

Vector reconstruction of objects from an unstructured point cloud obtained with a LiDAR-based system (light detection and ranging) is one of the most promising methods for building three-dimensional models of orchards. The cylinder fitting method for the woody structure reconstruction of leafless trees from point clouds obtained with a mobile terrestrial laser scanner (MTLS) has been analysed. The advantage of this method is that it performs the reconstruction in a single step. The most time-consuming part of the algorithm is the generation of the cylinder direction, which must be recalculated each time a point is included in the cylinder. The tree skeleton is obtained at the same time as the cluster of cylinders is formed. The method does not guarantee unique convergence, and the reconstruction parameter values must be chosen carefully. A balanced processing of clusters has also been defined, which follows the hierarchy of branches, predecessors and successors, and has proven to be very efficient in terms of processing time. The algorithm was applied to simulated MTLS data of virtual orchard models and to MTLS data of real orchards. The constraints applied in the method have been reviewed to ensure better convergence and simpler use of the parameters. The results obtained show a correct reconstruction of the woody structure of the trees, and the algorithm runs in linear-logarithmic (n log n) time.
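
One cylinder-fitting step can be sketched by estimating the axis as the dominant principal component of the current point cluster and the radius as the mean point-to-axis distance; refitting after each added point mirrors the recalculation mentioned above. This is a simplification for illustration, not the paper's fitting procedure.

```python
# Hedged sketch of a single cylinder fit over one branch-segment point cluster.
import numpy as np

def fit_cylinder(points):
    """points: (N, 3) array of cluster points belonging to one branch segment."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # axis direction = right singular vector with the largest singular value
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    # distance of each point to the axis line through the centroid
    proj = centered @ axis
    radial = centered - np.outer(proj, axis)
    radius = np.linalg.norm(radial, axis=1).mean()
    length = proj.max() - proj.min()
    return centroid, axis, radius, length

# adding a point and refitting mirrors the recalculation step mentioned above:
# centroid, axis, radius, length = fit_cylinder(np.vstack([cluster, new_point]))
```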

Relevance:

40.00%

Publisher:

Abstract:

Wave energy conversion differs essentially from other renewable energies, since the dependence between the device design and the energy resource is stronger. Dimensioning is therefore considered a key stage in any design project for Wave Energy Converters (WECs). Location, WEC concept, Power Take-Off (PTO) type, control strategy and hydrodynamic resonance considerations are some of the critical aspects to take into account in order to achieve a good performance. The paper proposes an automatic dimensioning methodology to be carried out in the initial stages of the design project, and the following elements are described to carry out the study: an optimization design algorithm, its objective functions and restrictions, a PTO model, and a procedure to evaluate the WEC energy production. After that, a parametric analysis is included considering different combinations of the key parameters previously introduced. A variety of study cases are analysed from the point of view of energy production for different design parameters, and all of them are compared with a reference case. Finally, a discussion is presented based on the results obtained, and some recommendations for facing the WEC design stage are given.
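
The parametric analysis can be sketched as a sweep over design parameters in which each combination is scored by its annual energy production over a wave-resource scatter diagram and compared with a reference case. The power model and scatter values below are crude placeholders, not the models used in the paper.

```python
# Hedged sketch of the parametric evaluation loop: sweep design parameters,
# estimate annual energy production (AEP) and compare with a reference case.
import itertools

SCATTER = {(1.5, 7.0): 1200, (2.5, 8.0): 800, (3.5, 9.0): 300}   # (Hs m, Te s): hours/year

def mean_power_kw(hs, te, diameter_m, pto_damping):
    """Placeholder hydrodynamic/PTO model: replace with a real WEC model."""
    return 0.5 * pto_damping * diameter_m ** 2 * hs ** 2 / te

def annual_energy_mwh(diameter_m, pto_damping):
    return sum(mean_power_kw(hs, te, diameter_m, pto_damping) * hours
               for (hs, te), hours in SCATTER.items()) / 1000.0

reference = annual_energy_mwh(diameter_m=10.0, pto_damping=0.8)
for d, c in itertools.product([8.0, 10.0, 12.0], [0.6, 0.8, 1.0]):
    aep = annual_energy_mwh(d, c)
    print(f"D={d:5.1f} m  damping={c:.1f}  AEP={aep:8.1f} MWh  vs ref {aep / reference:5.2f}x")
```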

Relevance:

40.00%

Publisher:

Abstract:

Development of a Sensorimotor Algorithm Able to Deal with Unforeseen Pushes and Its Implementation Based on VHDL is the title of my thesis, which concludes my Bachelor's degree at the Escuela Técnica Superior de Ingeniería y Sistemas de Telecomunicación of the Universidad Politécnica de Madrid. It covers the work I did in the Neurorobotics Research Laboratory of the Beuth Hochschule für Technik Berlin during my ERASMUS year in 2015. This thesis is focused on the field of robotics, specifically on an electronic circuit called the Cognitive Sensorimotor Loop (CSL) and its control algorithm, written in the VHDL hardware description language. What makes the CSL special is its ability to operate a motor both as a sensor and as an actuator. In this way it is possible to reach a balanced position in any of the robot joints (e.g. the robot manages to stand) without needing any conventional sensor: the back electromotive force (EMF) induced in the motor coils is measured and the control algorithm responds according to its magnitude. The CSL circuit mainly contains an analog-to-digital converter (ADC) and a driver. The ADC is a delta-sigma modulator that generates a stream of bits whose proportion of 1's and 0's is proportional to the back EMF. The control algorithm, running on an FPGA, processes the bit frame and outputs a signal for the driver. The driver, which has an H-bridge topology, supplies the motor with the power it needs and gives it the ability to rotate in both directions.

The objective of this thesis is to document the experiments and the overall work done on push-ignoring contractive sensorimotor algorithms, that is, sensorimotor algorithms that ignore forces of large magnitude (compared with gravity) applied to a pendulum system during a short time interval. This main objective is divided into two sub-objectives: (1) developing a system based on parameterized thresholds and (2) developing a system based on a push-bypassing filter. System (1) contains a module that outputs a signal which blocks the main sensorimotor algorithm when a push is detected. This module takes several parameters as inputs, e.g. the back-EMF increment required to consider a force a push, or the time interval between samples. System (2) consists of a low-pass Infinite Impulse Response digital filter that cuts any frequency faster than a typical push oscillation. This filter required an intensive study of how to implement certain functions and data types (fixed- or floating-point data) not supported by the standard VHDL packages. Once this was achieved, the next challenge was to simplify the solution as much as possible without using unofficial user-made packages. Both systems exhibited a series of advantages and disadvantages that are of interest for this document: stability, reaction time, simplicity and computational load are some of the many factors studied in the designed systems. Finally, some additions to the systems are also documented: a VGA visual interface, a module that compensates the ADC offset and the implementation of a bank of MIDI faders, among others.
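
The two push-handling approaches can be illustrated algorithmically (in Python here; the thesis implements them in VHDL) on the back-EMF estimate derived from the delta-sigma bit stream: a threshold on the back-EMF increment for system (1) and a first-order low-pass IIR filter for system (2). The threshold and filter coefficient are placeholders.

```python
# Hedged algorithmic sketch of the two push-handling ideas; values are placeholders.

def duty_cycle(bits):
    """Back-EMF estimate: fraction of 1's in one frame of the delta-sigma stream."""
    return sum(bits) / len(bits)

def push_detector(prev_emf, emf, delta_threshold=0.15):
    """System (1): block the sensorimotor loop when the back EMF jumps too fast."""
    return abs(emf - prev_emf) > delta_threshold

def lowpass_step(prev_out, emf, alpha=0.05):
    """System (2): first-order IIR low-pass; fast push transients are attenuated."""
    return prev_out + alpha * (emf - prev_out)

# example: a sudden jump in the frame duty cycle is flagged as a push, and the
# filtered value barely moves, so the motor command stays close to its old value
frames = [[1, 0, 1, 0] * 8, [1, 1, 1, 0] * 8]
prev = filt = duty_cycle(frames[0])
emf = duty_cycle(frames[1])
print(push_detector(prev, emf), lowpass_step(filt, emf))
```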

Relevance:

40.00%

Publisher:

Abstract:

We present a modelling method to estimate the 3-D geometry and location of homogeneously magnetized sources from magnetic anomaly data. As input information, the procedure needs the parameters defining the magnetization vector (intensity, inclination and declination) and the Earth's magnetic field direction. When these two vectors are expected to be different in direction, we propose to estimate the magnetization direction from the magnetic map. Then, using this information, we apply an inversion approach based on a genetic algorithm which finds the geometry of the sources by seeking the optimum solution from an initial population of models in successive iterations through an evolutionary process. The evolution consists of three genetic operators (selection, crossover and mutation), which act on each generation, and a smoothing operator, which looks for the best fit to the observed data and a solution consisting of plausible compact sources. The method allows the use of non-gridded, non-planar and inaccurate anomaly data and non-regular subsurface partitions. In addition, neither constraints for the depth to the top of the sources nor an initial model are necessary, although previous models can be incorporated into the process. We show the results of a test using two complex synthetic anomalies to demonstrate the efficiency of our inversion method. The application to real data is illustrated with aeromagnetic data of the volcanic island of Gran Canaria (Canary Islands).
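
A minimal genetic-algorithm skeleton mirroring the operators mentioned above (selection, crossover and mutation) applied to subsurface models encoded as flat parameter vectors; the forward model and misfit are placeholders to be replaced by the magnetic forward computation, and the smoothing operator is omitted for brevity.

```python
# Hedged GA skeleton: selection, one-point crossover and mutation over
# parameter vectors, minimizing a placeholder data misfit.
import random

def misfit(model, observed, forward):
    pred = forward(model)
    return sum((p - o) ** 2 for p, o in zip(pred, observed))

def evolve(observed, forward, n_params, pop_size=60, n_gen=200,
           p_mut=0.05, lo=-1.0, hi=1.0):
    pop = [[random.uniform(lo, hi) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(n_gen):
        scored = sorted(pop, key=lambda m: misfit(m, observed, forward))
        parents = scored[:pop_size // 2]                 # selection: keep the best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_params)          # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n_params):                    # mutation
                if random.random() < p_mut:
                    child[i] = random.uniform(lo, hi)
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda m: misfit(m, observed, forward))
```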

Relevance:

40.00%

Publisher:

Abstract:

We propose and discuss a new centrality index for urban street patterns represented as networks in geographical space. This centrality measure, which we call ranking-betweenness centrality, combines the idea behind the random-walk betweenness centrality measure with the ranking of the nodes of a network produced by an adapted PageRank algorithm. We first use an adapted PageRank algorithm in which information about the network under analysis is transformed into numerical values; these values, summarized in a data matrix, are associated with each node. After running the adapted PageRank algorithm, a ranking of the nodes is obtained according to their importance in the network. This classification is the starting point for applying an algorithm based on the random-walk betweenness centrality. A detailed example on a real urban street network is discussed to illustrate how the proposed ranking-betweenness centrality is evaluated, together with comparisons against other classical centrality measures.
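
The two building blocks can be sketched with networkx: a PageRank run biased by per-node data and a random-walk (current-flow) betweenness computation on the same street graph. The simple product used to fuse them at the end is only illustrative; the paper's ranking-betweenness combination is more elaborate.

```python
# Hedged sketch of the two ingredients of ranking-betweenness centrality.
import networkx as nx

def ranking_and_betweenness(G, node_data):
    """node_data: dict node -> non-negative value taken from the data matrix (not all zero)."""
    total = sum(node_data.values()) or 1.0
    personalization = {n: node_data.get(n, 0.0) / total for n in G}
    ranking = nx.pagerank(G, alpha=0.85, personalization=personalization)
    # current-flow betweenness is networkx's random-walk betweenness (connected graph required)
    betweenness = nx.current_flow_betweenness_centrality(G, normalized=True)
    # a simple, illustrative fusion: weight betweenness by the data-driven ranking
    return {n: ranking[n] * betweenness[n] for n in G}

# usage on a toy street grid:
# G = nx.grid_2d_graph(4, 4)
# scores = ranking_and_betweenness(G, {n: 1.0 for n in G})
```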

Relevance:

40.00%

Publisher:

Abstract:

We present an extension of the logic outer-approximation algorithm for dealing with disjunctive discrete-continuous optimal control problems whose dynamic behavior is modeled in terms of differential-algebraic equations. Although the proposed algorithm can be applied to a wide variety of discrete-continuous optimal control problems, we are mainly interested in problems where disjunctions are also present. Disjunctions are included to take into account only those parts of the underlying model that become relevant under certain processing conditions. By doing so, the numerical robustness of the optimization algorithm improves, since the parts of the model that are not active are discarded, leading to a smaller problem and avoiding potential model singularities. We test the proposed algorithm on three examples with different, complex dynamic behavior. In all the case studies, the number of iterations and the computational effort required to obtain the optimal solutions are modest, and the solutions are relatively easy to find.

Relevance:

40.00%

Publisher:

Abstract:

Outliers are objects that show abnormal behavior with respect to their context or that have unexpected values in some of their parameters. In decision-making processes, information quality is of the utmost importance: in specific applications, an outlying data element may represent an important deviation in a production process or a damaged sensor, so the ability to detect these elements can make the difference between a correct and an incorrect decision. This task is complicated by the large size of typical databases. Because of their importance for searching large volumes of data, researchers pay special attention to the development of efficient outlier detection techniques. This article presents a computationally efficient algorithm for the detection of outliers in large volumes of information. The proposal is based on an extension of the mathematical framework upon which the basic theory of outlier detection, founded on Rough Set Theory, has been constructed. From this starting point, current problems are analyzed and a detection method is proposed, along with a computational algorithm that performs outlier detection tasks with almost-linear complexity. To illustrate its viability, the results of applying the outlier-detection algorithm to a large database are presented as a concrete example.
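
The general flavour of frequency-based, rough-set-style outlier scoring can be sketched in a single linear pass: objects that fall into rare indiscernibility (equal-value) classes on many attributes receive high scores. This illustrates the idea only; the paper's method and its complexity analysis are richer.

```python
# Hedged sketch of a rough-set-flavoured, linear-time outlier score based on
# the size of the equivalence class each object falls into per attribute.
from collections import Counter

def outlier_scores(rows):
    """rows: list of equal-length tuples of categorical attribute values."""
    n, m = len(rows), len(rows[0])
    freq = [Counter(row[j] for row in rows) for j in range(m)]  # per-attribute class sizes
    scores = []
    for row in rows:
        # rarer equivalence classes contribute more to the outlier score
        scores.append(sum(1.0 - freq[j][row[j]] / n for j in range(m)) / m)
    return scores

data = [("red", "small"), ("red", "small"), ("red", "small"), ("blue", "large")]
print(outlier_scores(data))   # the last object stands out on both attributes
```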