72 results for "Particle swarm optimization algorithm PSO"


Relevância:

30.00%

Publicador:

Resumo:

Many macroscopic properties, such as hardness, corrosion resistance and catalytic activity, are directly related to the surface structure, that is, to the position and chemical identity of the outermost atoms of the material. Current experimental techniques for determining the surface structure produce a signature from which the structure must be inferred by solving an inverse problem: a solution is proposed, its corresponding signature is computed, and the result is compared to the experiment. This is a challenging optimization problem in which the search space and the number of local minima grow exponentially with the number of atoms, so a solution cannot be achieved for arbitrarily large structures. Nowadays, it is solved using a mixture of human knowledge and local search techniques: an expert proposes a solution that is refined with a local minimizer. If the outcome does not fit the experiment, a new solution must be proposed. Solving a small surface can take from days to weeks of this trial-and-error method. Here we describe our ongoing work on its solution. We use a hybrid algorithm that mixes evolutionary techniques with trust-region methods and reuses knowledge gained during the execution to avoid repeatedly searching the same structures. Its parallelization produces good results even without requiring the gathering of the full population, so it can be used in loosely coupled environments such as grids. With this algorithm, test cases that previously took weeks of expert time can be solved automatically in a day or two of uniprocessor time.
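Schematically, the hybrid strategy the abstract describes (evolutionary search, local refinement, and a cache to avoid re-exploring known structures) can be sketched as follows. The 1-D multimodal objective, all parameters, and the rounding-based cache key are illustrative assumptions, not the authors' actual surface-structure code:

```python
import math
import random

def energy(x):
    # Toy multimodal objective standing in for the surface-signature misfit.
    return (x - 3.0) ** 2 + math.sin(5.0 * x)

def local_refine(x, step=0.5, tol=1e-4):
    # Crude descent with a shrinking step, a stand-in for the trust-region stage.
    while step > tol:
        for cand in (x - step, x + step):
            if energy(cand) < energy(x):
                x = cand
                break
        else:
            step *= 0.5  # no improvement in either direction: shrink the step
    return x

def hybrid_search(pop_size=10, generations=30, seed=1):
    rng = random.Random(seed)
    cache = {}  # refined basins, keyed by a rounded start: avoids repeated searches
    pop = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        refined = []
        for x in pop:
            key = round(x, 1)
            if key not in cache:
                cache[key] = local_refine(x)
            refined.append(cache[key])
        refined.sort(key=energy)
        elite = refined[: pop_size // 2]
        # Evolutionary step: mutate the elite to repopulate.
        pop = elite + [e + rng.gauss(0.0, 1.0) for e in elite]
    return min(pop, key=energy)

best = hybrid_search()
```

The cache plays the role of the reused knowledge mentioned in the abstract: once a starting region has been refined, its local minimum is looked up instead of recomputed.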

Relevância:

30.00%

Publicador:

Resumo:

In this work, we consider the Minimum Weight Pseudo-Triangulation (MWPT) problem for a given set of n points in the plane. Pseudo-triangulations that are globally optimal with respect to weight are difficult to find by deterministic methods, since no polynomial algorithm is known. We show how the Ant Colony Optimization (ACO) metaheuristic can be used to find high-quality pseudo-triangulations of minimum weight. We present an experimental and statistical study based on our own set of instances, since no benchmarks for these problems were found in the literature. Throughout the experimental evaluation, we appraise the performance of the ACO metaheuristic on the MWPT problem.

Relevância:

30.00%

Publicador:

Resumo:

This paper focuses on the general problem of coordinating multiple robots. More specifically, it addresses the self-election of heterogeneous, specialized tasks by autonomous robots. We adopt a distributed or decentralized approach, in which the robots themselves, autonomously and individually, are responsible for selecting a particular task so that all existing tasks are optimally distributed and executed. We have established an experimental scenario for the corresponding multi-task distribution problem, and we propose solutions using two different approaches: deterministic algorithms based on Ant Colony Optimization and probabilistic algorithms based on Learning Automata. We have evaluated the robustness of the algorithms by perturbing the number of pending loads, simulating errors in the robots' estimates of the real number of pending tasks, as well as the dynamic generation of loads over time. The paper ends with a critical discussion of the experimental results.
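As a rough illustration of the Learning Automata side, the sketch below implements a linear reward-inaction update for task self-selection. The reward probabilities, learning rate and step count are hypothetical, not taken from the paper:

```python
import random

def lri_select(reward_probs, steps=400, lr=0.05, seed=7):
    """Linear reward-inaction automaton: on a rewarded action, shift
    probability mass toward it; on a penalty, leave probabilities unchanged."""
    rng = random.Random(seed)
    n = len(reward_probs)
    p = [1.0 / n] * n  # start with a uniform task preference
    for _ in range(steps):
        # Sample a task according to the current preference vector.
        r, acc, a = rng.random(), 0.0, n - 1
        for i, pi in enumerate(p):
            acc += pi
            if r < acc:
                a = i
                break
        if rng.random() < reward_probs[a]:  # environment rewards the choice
            p = [pi * (1.0 - lr) for pi in p]
            p[a] += lr
    return p

# Two hypothetical tasks: task 0 succeeds 90% of the time, task 1 only 10%.
prefs = lri_select([0.9, 0.1])
```

Each robot running such an automaton drifts toward the task on which it is most effective, which is one way the decentralized self-election described above can emerge.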

Relevância:

30.00%

Publicador:

Resumo:

This paper studies the problem of determining the position of beacon nodes in Local Positioning Systems (LPSs), for which there are no inter-beacon distance measurements available and neither the mobile node nor any of the stationary nodes has positioning or odometry information. The common solution uses a mobile node capable of measuring its distance to the stationary beacon nodes within a sensing radius. Many authors have applied heuristic methods based on optimization algorithms to solve the problem. However, such methods require a good initial estimate of the node positions in order to find the correct solution. In this paper we present a new method to calculate the inter-beacon distances, and hence the beacon positions, based on a linearization of the trilateration equations into a closed-form solution that does not require any initial estimate. Simulations and field evaluations show a good estimation of the beacon node positions.
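The linearization trick the abstract relies on can be illustrated in the classical direction (fixing a mobile position from known beacons): subtracting one range equation from the others cancels the quadratic terms and leaves a linear least-squares problem with a closed-form solution. The geometry below is a made-up example, not the paper's setup:

```python
import numpy as np

def trilaterate(beacons, dists):
    """Closed-form position fix: subtracting the first range equation
    |p - b_i|^2 = d_i^2 from the others removes the |p|^2 terms,
    leaving the linear system 2(b_i - b_0) . p = d_0^2 - d_i^2 + |b_i|^2 - |b_0|^2."""
    b = np.asarray(beacons, dtype=float)
    d = np.asarray(dists, dtype=float)
    A = 2.0 * (b[1:] - b[0])
    rhs = (d[0] ** 2 - d[1:] ** 2) + np.sum(b[1:] ** 2, axis=1) - np.sum(b[0] ** 2)
    pos, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return pos

# Hypothetical beacon layout and an exact range measurement set.
beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
true_pos = np.array([1.0, 2.5])
dists = [np.linalg.norm(true_pos - np.array(b)) for b in beacons]
est = trilaterate(beacons, dists)
```

With noise-free ranges the least-squares solution recovers the position exactly; with noisy ranges the same system gives the best linear fit without any initial estimate, which is the property the paper exploits.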

Relevância:

30.00%

Publicador:

Resumo:

Aerodynamic design influences several aspects of high-speed train performance to a very significant degree. Given that new aerodynamic problems have arisen with the increase of cruise speed and the lightness of the vehicle, an optimization study addressing train aerodynamics is clearly of interest. Thus, the aerodynamic optimization of the nose shape of a high-speed train is presented in this thesis, carried out by means of advanced optimization methods. Among these methods, genetic algorithms and the adjoint method have been selected. A theoretical description of their bases, characteristics and implementation is detailed throughout the thesis, explaining the reasons for their selection and the advantages and drawbacks each one implies. Genetic algorithms require the geometrical parameterization of any optimal candidate and the generation of a metamodel, or surrogate model, that complements the optimization process. These points are addressed with special attention in the first block of the thesis, which focuses on the methodology followed in this study. The second block concerns the application of these methods to optimize the aerodynamic performance of a high-speed train in several scenarios. These scenarios cover the most representative operating conditions of high-speed trains, as well as some of the most demanding aerodynamic problems they face: front-wind and cross-wind situations in open air, and the entrance of a high-speed train into a tunnel. Both genetic algorithms and the adjoint method have been applied to minimize the aerodynamic drag on the train under front wind in open air. The comparison of these methods allows the methodology and computational cost of each one to be evaluated, along with the drag reduction achieved in that optimization. Simplicity and robustness, the straightforward formulation of multi-objective optimization, and the capability of finding a global optimum in a multi-modal design space are the main attributes of genetic algorithms. However, the requirement to geometrically parameterize every candidate is a significant drawback that is avoided with the adjoint method, whose independence from the number of design variables leads to a relevant reduction of the pre-processing and computational cost. Considering cross-wind stability, both optimization methods are used again to minimize the side force. In this case, a simplified geometric parameterization of the train nose is adopted, which notably reduces the computational cost of the whole optimization study while still capturing the most relevant geometrical characteristics of a high-speed train. This analysis identifies and quantifies the influence of each design variable on the side force on the train; the design of the windward upper edge (A-pillar roundness) is observed to be the most influential parameter, with a more important effect than the nose length or the train cross-section area. Finally, a third scenario is considered to validate these methods and their capability of finding a global optimum. The entrance of a train into a tunnel is one of the most demanding situations for a high-speed train because of the overpressure peak generated, which affects passenger comfort, vehicle stability and the surroundings near the tunnel exit. The aerodynamic consequences are basically summarized in two correlated phenomena: the generation of pressure waves and an increase in aerodynamic drag, notably higher than in open air. This multi-objective optimization problem is solved using genetic algorithms, and the result is a Pareto front containing the set of optimal solutions that minimize both objectives.

Relevância:

30.00%

Publicador:

Resumo:

The objective of this study was to propose a multi-criteria optimization and decision-making technique to solve food engineering problems. This technique was demonstrated using experimental data obtained on the osmotic dehydration of carrot cubes in a sodium chloride solution. The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used in this study to compute the initial set of non-dominated or Pareto-optimal solutions. Multiple non-linear regression analysis was performed on a set of experimental data in order to obtain particular multi-objective functions (responses), namely water loss, solute gain, rehydration ratio, three different colour criteria of the rehydrated product, and sensory evaluation (organoleptic quality). Two multi-criteria decision-making approaches, the Analytic Hierarchy Process (AHP) and the Tabular Method (TM), were used simultaneously to choose the best alternative among the set of non-dominated solutions. The multi-criteria optimization and decision-making technique proposed in this study can facilitate the assessment of criteria weights, giving rise to a fairer, more consistent, and more adequate final compromise solution or food process. This technique can be useful to food scientists in research and education, as well as to engineers involved in the improvement of a variety of food engineering processes.
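The first stage of such a technique, computing the non-dominated (Pareto-optimal) set before any decision-making step, can be sketched generically; the candidate criteria values and the weights used afterward are invented for illustration:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every criterion (minimization)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep every point that no other point dominates.
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (water loss, colour deviation) pairs, both to be minimized.
candidates = [(2.0, 5.0), (3.0, 3.0), (4.0, 4.0), (5.0, 1.0), (6.0, 6.0)]
front = pareto_front(candidates)

# A weighted aggregation then picks one compromise from the front
# (the 0.6 / 0.4 weights stand in for the AHP-derived criteria weights).
best = min(front, key=lambda p: 0.6 * p[0] + 0.4 * p[1])
```

The decision-making stage (AHP or the Tabular Method in the study) replaces the fixed weights here with a structured assessment, but it always operates on a non-dominated set computed as above.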

Relevância:

30.00%

Publicador:

Resumo:

In this paper, we present a theoretical model based on the detailed balance theory of solar thermophotovoltaic systems comprising multijunction photovoltaic cells, a sunlight concentrator and spectrally selective surfaces. The full system is defined by means of 2n+8 variables (where n is the number of sub-cells of the multijunction cell): the sunlight concentration factor, the absorber cut-off energy, the emitter-to-absorber area ratio, the emitter cut-off energy, the band-gap energy(ies) and voltage(s) of the sub-cells, the reflectivity of the cells' back-side reflector, the emitter-to-cell and cell-to-cell view factors, and the emitter-to-cell area ratio. We have used this model to carry out a multi-variable system optimization by means of a multidimensional direct-search algorithm. This analysis allows us to find the set of system variables whose combined effects result in the maximum overall system efficiency. From this analysis, we have seen that multijunction cells are excellent candidates to enhance the system efficiency and the electrical power density. In particular, multijunction cells provide great benefits in systems with a notable presence of optical losses, which are unavoidable in practical systems. Also, we have seen that the use of spectrally selective absorbers, rather than black-body absorbers, allows higher system efficiencies to be achieved at both lower concentration and lower emitter-to-absorber area ratio. Finally, we have seen that sun-to-electricity conversion efficiencies above 30% and electrical power densities above 50 W/cm² are achievable for this kind of system.
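A multidimensional direct-search algorithm of the kind mentioned can be illustrated with a simple compass (coordinate) search. The quadratic objective below is a placeholder for the actual system-efficiency model, and the sketch minimizes rather than maximizes (equivalent up to a sign):

```python
def compass_search(f, x0, step=1.0, tol=1e-6):
    """Multidimensional direct search: poll the 2n axis directions,
    move on improvement, and halve the step when no direction improves."""
    x = list(x0)
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                y = list(x)
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5
    return x, fx

# Toy smooth objective standing in for the (negated) system efficiency.
best_x, best_f = compass_search(lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2,
                                [0.0, 0.0])
```

No gradients of the detailed-balance model are needed, which is precisely why direct-search methods suit this kind of multi-variable physical optimization.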

Relevância:

30.00%

Publicador:

Resumo:

A genetic algorithm (GA) is employed for the multi-objective shape optimization of the nose of a high-speed train. Aerodynamic problems observed at high speeds become even more relevant when traveling along a tunnel. The objective is to minimize both the aerodynamic drag and the amplitude of the pressure gradient of the compression wave generated when a train enters a tunnel. The main drawback of GAs is the large number of evaluations needed in the optimization process. Metamodel-based optimization is considered to overcome this problem. As a result, an explicit relationship between the pressure gradient and the geometrical parameters is obtained.
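A minimal sketch of metamodel-based optimization, assuming a polynomial surrogate and a grid sweep standing in for the GA inner loop; the response function and sample points are invented, not the CFD data of the paper:

```python
import numpy as np

def expensive_eval(x):
    # Stand-in for a costly CFD run: a smooth drag-like response.
    return (x - 2.0) ** 2 + 3.0

# 1. Sample the expensive model at a few design points.
xs = np.array([0.0, 1.0, 4.0])
ys = np.array([expensive_eval(x) for x in xs])

# 2. Fit a cheap quadratic metamodel to the samples.
coeffs = np.polyfit(xs, ys, deg=2)
surrogate = np.poly1d(coeffs)

# 3. Optimize the metamodel instead of the expensive model
#    (a GA would search here; a dense grid keeps the sketch deterministic).
grid = np.linspace(-5.0, 5.0, 1001)
x_best = grid[np.argmin(surrogate(grid))]
```

The point of the metamodel is exactly this substitution: thousands of optimizer evaluations hit the cheap surrogate, while the expensive solver is only called to build (and later refine) it.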

Relevância:

30.00%

Publicador:

Resumo:

We present a novel framework for the analysis and optimization of encoding latency for multiview video. First, we characterize the elements that influence encoding latency: (i) the multiview prediction structure and (ii) the hardware encoder model. Then, we provide algorithms to find the encoding latency of any arbitrary multiview prediction structure. The proposed framework relies on the directed acyclic graph encoder latency (DAGEL) model, which provides an abstraction of the processing capacity of the encoder by considering an unbounded number of processors. Using graph-theoretic algorithms, the DAGEL model allows us to compute the encoding latency of a given prediction structure and to determine the contribution of the prediction dependencies to it. As an example of its application, we propose an algorithm to reduce the encoding latency of a given multiview prediction structure to a target value. In our approach, a minimum number of frame dependencies are pruned until the target latency is achieved, thus minimizing the degradation of the rate-distortion performance due to the removal of prediction dependencies. Finally, we analyze the latency performance of the DAGEL-derived prediction structures in multiview encoders with limited processing capacity.
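With an unbounded number of processors, the encoding latency of a prediction structure reduces to a longest-path computation over the dependency DAG, which is the core idea behind the DAGEL abstraction. The toy prediction structure below (frame names and unit encoding costs) is an assumption for illustration:

```python
# Hypothetical multiview prediction structure: each frame maps to the
# frames that must be encoded before it (its prediction references).
deps = {
    "I0": [],
    "P1": ["I0"],
    "B2": ["I0", "P1"],
    "P3": ["P1"],
    "B4": ["P3", "B2"],
}
cost = {f: 1 for f in deps}  # one time slot per frame on its own processor

def latency(frame):
    """With unbounded processors, a frame's finish time is its own cost plus
    the latest finish time among its references: the longest path in the DAG."""
    return cost[frame] + max((latency(d) for d in deps[frame]), default=0)

total_latency = max(latency(f) for f in deps)
```

Pruning a dependency (removing an edge from `deps`) can only shorten or preserve the longest path, which is why the latency-reduction algorithm described above works by removing a minimum number of edges until the target is met.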

Relevância:

30.00%

Publicador:

Resumo:

This thesis was carried out in the context of the UPMSat-2 project, a microsatellite under design and manufacture at the Instituto Universitario de Microgravedad "Ignacio Da Riva" (IDR/UPM) of the Universidad Politécnica de Madrid. The application of the Concurrent Engineering (CE) methodology in the framework of Multidisciplinary Design Optimization (MDO) is one of the main objectives of the present work. In recent years, there has been continuing interest in the participation of university research groups in space technology studies by means of their own microsatellites. Involvement in such projects has some inherent challenges, such as limited budget and facilities. Also, because the main objective of these projects is educational, there are usually uncertainties regarding their in-orbit mission and scientific payloads in the early phases of the project. On the other hand, there are predetermined limitations on their mass and volume budgets, owing to the fact that most of them are launched as auxiliary payloads, which reduces the launch cost considerably. The satellite structural subsystem is the one most affected by the launcher constraints, which can affect different aspects, including dimensions, strength and frequency requirements. In the first part of this thesis, the main focus is on developing a structural design sizing tool that evaluates not only the primary structure properties as variables but also satellite system-level variables such as the payload mass budget and the satellite's total mass and dimensions. This approach enables the design team to obtain better insight into the design in an extended design envelope. The structural design sizing tool is based on analytical structural design formulas and appropriate assumptions, including both static and dynamic models of the satellite. A Genetic Algorithm (GA) is applied to the design space for both single- and multi-objective optimizations. The result of the multi-objective optimization is a Pareto front based on two objectives: minimum satellite total mass and maximum payload mass budget. On the other hand, microsatellites are of interest for space missions because of their lower cost and shorter development time. The strong need for remote sensing applications drives their popularity in such missions. Satellite remote sensing missions are essential for long-term research on the Earth's resources and environment. In remote sensing missions there are tight interrelations between different requirements such as orbital altitude, revisit time, mission cycle life and spatial resolution, and all of these requirements can affect the whole design. During the last years, the application of CE in space missions has demonstrated a great advantage in reaching optimum design baselines considering both the performance and the cost of the project. A well-known example of CE application is the Concurrent Design Facility (CDF) of the European Space Agency (ESA). It is clear that for university-class microsatellite projects, having or developing such a facility is beyond the project's capabilities; nevertheless, practicing CE at any scale can be beneficial. In the second part of this thesis, the main focus is on developing an MDO framework applicable to the conceptual design phase of remote sensing microsatellites. This approach enables the design team to evaluate the interaction between the different system design variables. The presented MDO framework contains not only system-level variables, such as the satellite total mass and total power, but also mission requirements, such as the spatial resolution and the revisit time. The microsatellite sizing process is divided into three major design disciplines: a) orbit design, b) payload sizing and c) bus sizing. First, different mission parameters are calculated for a practical range of sun-synchronous orbits (SS-Os). Then, according to the orbital parameters and a reference remote sensing instrument, the mass and power of the payload are calculated. Satellite bus sizing is based on the mass and power of the different subsystems, computed using design estimation relationships. Within the bus sizing, the power subsystem design considers more detailed design variables, including the mission scenario and different types of solar cells and batteries. The mission scenario is selected so as to obtain a coverage belt on the Earth's surface, parallel to the equator, after each revisit time. In order to evaluate the interrelations between the different variables inside the design space, all the mentioned design disciplines are combined in a unified code; the integrated satellite system sizing tool developed in this section is an application of CE to the conceptual design of remote sensing microsatellite projects. Finally, a basic MDO framework is adjusted to the developed satellite system design tool, and design optimization is performed by means of a single-objective GA minimizing the microsatellite total mass. According to the results of the MDO application, there exist different optimum design points, all with the minimum satellite total mass but with different mission variables. This demonstrates the applicability of the MDO approach for system engineering trade-off studies in the conceptual design phase of such projects. The main conclusion of this thesis is that the classical design approach, which usually starts with the mission and payload definition, is not necessarily the best methodology for all satellite projects; the university-class microsatellite is an example of such a project. For this reason, an integrated satellite sizing tool including different design disciplines, focusing on the structural subsystem and considering an a priori unknown payload, has been developed. According to the results, the minimum satellite total mass and the maximum mass available for the unknown payload are conflicting objectives, and a multi-objective GA optimization was conducted to find the Pareto front. Based on the optimization results, it is concluded that selecting a satellite total mass in the range of 40-60 kg can be considered optimum for a university-class microsatellite project with unknown payload(s). Also, the CE methodology was applied to the conceptual design process of remote sensing microsatellites. The results of the CE application provide a clear understanding of the interaction between satellite system design requirements, such as satellite total mass and power, and mission variables, such as revisit time and spatial resolution. The MDO application minimizes the total mass of a remote sensing microsatellite; its results clarify the relationships between the different system and mission design variables and provide optimum design baselines for the selected objective during the initial design phases.
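The way the three disciplines chain into one unified sizing code that a single-objective search sweeps can be caricatured as follows; every relation and constant below is a made-up placeholder, not the thesis's actual design relationships:

```python
def orbit_design(altitude_km):
    # Lower orbits see the ground closer: hypothetical resolution/revisit relations.
    gsd_m = 0.01 * altitude_km
    revisit_days = 500.0 / altitude_km
    return gsd_m, revisit_days

def payload_sizing(gsd_m):
    # Finer resolution needs a larger aperture: heavier, hungrier instrument.
    mass_kg = 20.0 / gsd_m
    power_w = 15.0 / gsd_m
    return mass_kg, power_w

def bus_sizing(payload_mass, payload_power):
    # Empirical-style margins for structure, power and the remaining bus.
    return 1.2 * payload_mass + 0.05 * payload_power + 10.0

def total_mass(altitude_km):
    # The unified code: orbit feeds payload sizing, which feeds bus sizing.
    gsd, _ = orbit_design(altitude_km)
    m_pl, p_pl = payload_sizing(gsd)
    return m_pl + bus_sizing(m_pl, p_pl)

# Single-objective sweep standing in for the GA: minimize total mass
# subject to a resolution requirement (GSD <= 8 m).
candidates = [h for h in range(400, 901, 10) if orbit_design(h)[0] <= 8.0]
best_alt = min(candidates, key=total_mass)
```

Even in this caricature the MDO flavor is visible: the mission requirement (resolution) constrains the design space, and the optimum trades it against the system-level objective (total mass).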

Relevância:

30.00%

Publicador:

Resumo:

In this study, we present a framework based on ant colony optimization (ACO) for tackling combinatorial problems. ACO algorithms have been applied to many different problems, with research focusing on algorithmic variants that obtain high-quality solutions. Usually, the implementations are redone for each new problem, even when the details of the ACO algorithm stay the same. Our goal, however, is to build a sustainable framework for applications on permutation problems. We concentrate on understanding the behavior of pheromone trails and the specific methods that can be combined with them. Eventually, we will propose an automatic offline configuration tool to build an effective algorithm.
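A minimal ACO skeleton for a permutation problem (here a tiny TSP) shows the pheromone-trail mechanics such a framework would abstract over; the parameters and the four-city instance are illustrative:

```python
import random

def aco_tsp(dist, n_ants=8, iters=40, rho=0.5, seed=3):
    """Minimal ACO for a permutation problem: ants build tours guided by
    pheromone, and the best tour reinforces its edges after evaporation."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                opts = [j for j in range(n) if j not in tour]
                # Sample the next city proportionally to pheromone * heuristic.
                w = [tau[i][j] / (dist[i][j] + 1e-9) for j in opts]
                tour.append(rng.choices(opts, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporation, then reinforcement along the best-so-far tour.
        tau = [[(1.0 - rho) * t for t in row] for row in tau]
        for k in range(n):
            i, j = best_tour[k], best_tour[(k + 1) % n]
            tau[i][j] += 1.0 / best_len
            tau[j][i] += 1.0 / best_len
    return best_tour, best_len

# Four cities on a unit square: the optimal tour is the perimeter, length 4.
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
        for ax, ay in pts]
best_tour, best_len = aco_tsp(dist)
```

The reusable parts a framework would factor out are exactly the construction rule, the evaporation/deposit policy and the parameters, since they are shared across permutation problems.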

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Markov Chain Monte Carlo methods are widely used in signal processing and communications for statistical inference and stochastic optimization. In this work, we introduce an efficient adaptive Metropolis-Hastings algorithm to draw samples from generic multimodal and multidimensional target distributions. The proposal density is a mixture of Gaussian densities whose parameters (weights, mean vectors and covariance matrices) are all updated from the previously generated samples through simple recursive rules. Numerical results are provided for the one- and two-dimensional cases.
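The recursive-update idea can be sketched in one dimension with a single Gaussian component standing in for the paper's mixture (a hedged simplification; the variance floor, the target, and all settings below are illustrative assumptions):

```python
import math, random

def adaptive_imh(log_target, n_samples, mu0=0.0, sig0=5.0, seed=0):
    """Simplified 1-D adaptive independent Metropolis-Hastings.
    The paper adapts a full Gaussian *mixture*; this sketch keeps one
    Gaussian whose mean and variance track the chain via running moments."""
    rng = random.Random(seed)
    mu, var = mu0, sig0 ** 2
    x, samples = mu0, []

    def log_q(z):                      # log-density of the current proposal
        return -0.5 * (math.log(2 * math.pi * var) + (z - mu) ** 2 / var)

    for t in range(1, n_samples + 1):
        y = rng.gauss(mu, math.sqrt(var))          # propose from N(mu, var)
        # independence-sampler acceptance ratio, computed in log space
        log_a = (log_target(y) - log_target(x)) + (log_q(x) - log_q(y))
        if rng.random() < math.exp(min(0.0, log_a)):
            x = y
        samples.append(x)
        # simple recursive (Welford-style) updates of the proposal moments
        d = x - mu
        mu += d / t
        var = max(var + (d * (x - mu) - var) / t, 0.25)  # floor keeps q wide
    return samples
```

Each new sample nudges the proposal's mean and variance, so the proposal gradually matches the target without storing the whole history.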

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Multi-label classification (MLC) is the supervised learning problem in which an instance may be associated with multiple labels. Modeling dependencies between labels allows MLC methods to improve their performance at the expense of increased computational cost. In this paper we focus on the classifier chains (CC) approach to modeling dependencies. On the one hand, the original CC algorithm makes a greedy approximation; it is fast but tends to propagate errors down the chain. On the other hand, a recent Bayes-optimal method improves performance but is computationally intractable in practice. Here we present a novel double-Monte Carlo scheme (M2CC), both for finding a good chain sequence and for performing efficient inference. The M2CC algorithm remains tractable for high-dimensional data sets and obtains the best overall accuracy, as shown on several real data sets with input dimension as high as 1449 and up to 103 labels.
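The greedy-versus-Monte-Carlo contrast can be illustrated on a tiny hypothetical chain (the conditional probabilities below are made up for the example; they are not from the paper):

```python
import random
from collections import Counter

def greedy_cc(x, cond_probs):
    """Greedy classifier-chain inference: commit to the most likely value
    of each label in chain order. Fast, but an early mistake propagates."""
    y = []
    for p in cond_probs:               # p(x, y_prefix) -> P(label = 1)
        y.append(1 if p(x, tuple(y)) >= 0.5 else 0)
    return tuple(y)

def mc_cc(x, cond_probs, n_samples=2000, seed=0):
    """Monte Carlo inference on the same chain: sample whole label vectors
    from the chain's joint distribution and return the most frequent one,
    approximating the mode that a Bayes-optimal search computes exactly."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_samples):
        y = []
        for p in cond_probs:
            y.append(1 if rng.random() < p(x, tuple(y)) else 0)
        counts[tuple(y)] += 1
    return counts.most_common(1)[0][0]

# Hypothetical two-label chain where greedy inference misses the joint mode:
# P(y1=1|x) = 0.4; P(y2=1|x, y1=1) = 0.9; P(y2=1|x, y1=0) = 0.5.
chain = [lambda x, prev: 0.4,
         lambda x, prev: 0.9 if prev == (1,) else 0.5]
```

Here the joint mode is (1, 1) with probability 0.36, yet the greedy pass returns (0, 1) because it commits to y1 = 0 first; sampling the chain recovers the mode.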

Relevância:

30.00% 30.00%

Publicador:

Resumo:

ABSTRACT: This Thesis addresses the efficiency problems of electrical grids from the consumption point of view. In particular, efficiency is improved by smoothing the aggregated consumption curve. This smoothing objective entails two major improvements in the use of electrical grids: i) in the short term, a better use of the existing infrastructure and ii) in the long term, a reduction of the infrastructure required to supply the same energy needs. In addition, this Thesis faces a new energy paradigm in which distributed generation, in particular photovoltaic (PV) generation, is widespread over the electrical grids. This kind of energy source affects the operation of the grid by increasing its variability, which implies that a high penetration rate of photovoltaic electricity is harmful to grid stability. This Thesis seeks to smooth the aggregated consumption while accounting for this energy source. Therefore, not only is the efficiency of the electrical grid improved, but the penetration of photovoltaic electricity into the grid can also be increased. This proposal brings great benefits in the economic, social and environmental fields.
The actions that influence the way consumers use electricity in order to achieve energy savings or higher efficiency are called Demand-Side Management (DSM). This Thesis proposes two different DSM algorithms to meet the aggregated consumption smoothing objective. The difference between the two DSM algorithms lies in the framework in which they operate: the local framework and the grid framework. Depending on the DSM framework, the energy goal and the procedure to reach it are different. In the local framework, the DSM algorithm uses only local information. It does not take into account other consumers or the aggregated consumption of the electrical grid. Although this may differ from the general definition of DSM, it makes sense in local facilities equipped with Distributed Energy Resources (DERs). In this case, DSM focuses on maximizing the use of local energy, reducing grid dependence. The proposed DSM algorithm significantly improves the self-consumption of the local PV generator. Simulated and real experiments show that self-consumption is an important energy management strategy, reducing electricity transport and encouraging users to control their energy behavior. However, despite all the advantages of increased self-consumption, it does not contribute to smoothing the aggregated consumption. The effects of the local facilities on the electrical grid are studied when the DSM algorithm focuses on self-consumption maximization. This approach may have undesirable effects, increasing the variability of the aggregated consumption instead of reducing it, because the algorithm considers only local variables. The results suggest that coordination between facilities is required: the consumption should be modified taking other elements of the grid into account and seeking to smooth the aggregated consumption.
In the grid framework, the DSM algorithm takes into account both local and grid information. This Thesis develops a self-organized algorithm to manage the consumption of an electrical grid in a distributed way. The goal of this algorithm is the smoothing of the aggregated consumption, as in classical DSM implementations. The distributed approach means that DSM is performed from the consumer side without following direct commands issued by a central entity. Therefore, this Thesis proposes a parallel management structure rather than a hierarchical one as in classical electrical grids, which requires a coordination mechanism between facilities. This Thesis seeks to minimize the amount of information needed for this coordination. To achieve this objective, two collective coordination techniques are used: coupled oscillators and swarm intelligence. Combining these techniques to coordinate a system with the characteristics of the electrical grid is in itself a novel approach, so this coordination objective is a contribution not only to the energy management field but also to the field of collective systems. Results show that the proposed DSM algorithm reduces the difference between the maxima and minima of the electrical grid in proportion to the amount of energy controlled by the algorithm: the greater the amount of energy controlled, the greater the improvement in grid efficiency. In addition to the advantages resulting from the smoothing of the aggregated consumption, other advantages arise from the distributed approach followed in this Thesis. They are summarized in the following features of the proposed DSM algorithm. Robustness: in a centralized system, a failure of the central node causes a malfunction of the whole system; managing the grid from a distributed point of view means there is no central control node, so a failure in any facility does not affect the overall operation of the grid. Data privacy: the distributed topology means that there is no central node holding sensitive information about all consumers; this Thesis goes a step further, as the proposed DSM algorithm does not use specific information about consumer behavior, making the coordination between facilities completely anonymous. Scalability: the proposed DSM algorithm operates with any number of facilities, allowing the incorporation of new facilities without affecting its operation. Low cost: the proposed DSM algorithm adapts to current grids without topological requirements, and every facility computes its own management with low computational requirements, so no central node with high computational power is needed. Quick deployment: the scalability and low-cost features of the proposed DSM algorithm allow a quick deployment; no complex deployment planning is required.
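One way to picture the coupled-oscillator coordination is with repulsive Kuramoto oscillators: starting nearly synchronized (all loads clustered in time), the repulsive coupling spreads the phases over the cycle, which is the kind of desynchronization a smoothing scheme needs. This is an illustrative toy under that assumption, not the Thesis' algorithm:

```python
import cmath, math

def order_param(phases):
    """Kuramoto order parameter r: 1 = fully synchronized, ~0 = spread out."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

def desynchronize(n=10, k=1.0, steps=6000, dt=0.01):
    """Identical phase oscillators with REPULSIVE mean-field Kuramoto
    coupling push each other apart, so the instants at which each
    facility would run its deferrable load spread over the cycle."""
    phases = [0.1 * i for i in range(n)]           # start nearly in phase
    for _ in range(steps):
        z = sum(cmath.exp(1j * p) for p in phases) / n
        r, psi = abs(z), cmath.phase(z)
        # repulsive update: each phase moves AWAY from the mean phase psi
        phases = [(p - dt * k * r * math.sin(psi - p)) % (2 * math.pi)
                  for p in phases]
    return phases
```

Starting from an order parameter near 0.96 (clustered load events), the repulsive dynamics drive it toward 0, i.e. the events de-cluster and the aggregate flattens, using only each oscillator's view of the mean field.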

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Abstract: Modern mobile devices have become increasingly powerful in functionality and entertainment as next-generation mobile computing and communication technologies rapidly advance. However, battery capacity has not experienced an equivalent increase. The user experience of modern mobile systems is therefore greatly affected by battery lifetime, an unstable factor that is hard to control. To address this problem, previous works proposed energy-centric power management (PM) schemes that provide a strong guarantee on battery lifetime by globally managing energy as the first-class resource in the system. As the processor scheduler plays a pivotal role in power management and application performance guarantees, this thesis explores the user experience optimization of energy-limited mobile systems from the perspective of energy-centric processor scheduling. This thesis first analyzes the general factors contributing to the mobile system user experience. It then determines the essential requirements on energy-centric processor scheduling for user experience optimization: proportional power sharing, time-constraint compliance and, when necessary, a tradeoff between the power share and the time-constraint compliance.
To meet these requirements, the classical fair queueing algorithm and its reference model are extended from the network and CPU bandwidth sharing domains to the energy sharing domain, and on that basis the energy-based fair queueing (EFQ) algorithm is proposed for energy-centric processor scheduling. The EFQ algorithm provides proportional power shares to tasks by scheduling them based on their energy consumption and weights. The power share of each time-sensitive task is protected upon changes in the scheduling environment to guarantee stable performance, and any instantaneous power share over-allocated to one time-sensitive task can be fairly re-allocated to the other tasks. In addition, to better support real-time and multimedia scheduling, a real-time-friendly mechanism is combined with the EFQ algorithm to give time-limited scheduling preference to time-sensitive tasks. The properties of the EFQ algorithm are evaluated through high-level modelling and simulation. The simulation results indicate that the essential requirements of energy-centric processor scheduling can be achieved. The EFQ algorithm is later implemented in the Linux kernel. To assess the properties of the Linux-based EFQ scheduler, an experimental test bench is developed, based on an embedded platform, a multithreading test-bench program, and an open-source benchmark suite. Through specifically designed experiments, this thesis first verifies the properties of EFQ in power share management and real-time scheduling, and then explores the potential benefits of EFQ scheduling in optimizing the user experience of energy-limited mobile systems. Experimental results on power share management show that EFQ is more effective than the Linux CFS scheduler at managing power shares and achieves proportional sharing of the system power regardless of the device on which the energy is spent.
Experimental results on real-time scheduling demonstrate that EFQ achieves effective, flexible and robust time-constraint compliance even as the energy estimation error and the number of tasks increase. Finally, a comparative analysis of the experimental results on user experience optimization demonstrates that EFQ is more effective and flexible than traditional processor scheduling algorithms, such as those of the default Linux scheduler, in optimizing and preserving the user experience of energy-limited mobile systems.
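The core idea of carrying fair queueing into the energy domain can be sketched with virtual tags that advance by energy over weight (a hedged toy under that reading of the abstract, not the thesis' EFQ implementation; task names and parameters are made up):

```python
import heapq

def efq_schedule(tasks, quanta):
    """Toy energy-based fair queueing: each task keeps a virtual tag that
    advances by (energy consumed) / (weight); the task with the smallest
    tag runs next, so long-run energy shares become proportional to the
    weights.

    tasks: {name: (weight, energy_drawn_per_quantum)}"""
    heap = [(0.0, name) for name in sorted(tasks)]
    heapq.heapify(heap)
    energy_used = {name: 0.0 for name in tasks}
    for _ in range(quanta):
        tag, name = heapq.heappop(heap)      # pick the smallest virtual tag
        weight, energy = tasks[name]
        energy_used[name] += energy          # "run" the task for one quantum
        heapq.heappush(heap, (tag + energy / weight, name))
    return energy_used

# Two always-runnable tasks; "a" has twice the weight of "b",
# so it should end up with twice the energy share.
use = efq_schedule({"a": (2.0, 1.0), "b": (1.0, 1.0)}, quanta=3000)
```

Because the tag grows more slowly for heavier-weighted tasks, a task's cumulative energy converges to its weight's share of the total, which is the proportional-sharing property the abstract describes.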