881 results for Multi-objective genetic algorithm
Abstract:
This paper describes a Genetic Algorithms approach to a manpower-scheduling problem arising at a major UK hospital. Although Genetic Algorithms have been successfully used for similar problems in the past, they always had to overcome the limitations of the classical Genetic Algorithms paradigm in handling the conflict between objectives and constraints. The approach taken here is to use an indirect coding based on permutations of the nurses, and a heuristic decoder that builds schedules from these permutations. Computational experiments based on 52 weeks of live data are used to evaluate three different decoders with varying levels of intelligence, and four well-known crossover operators. Results are further enhanced by introducing a hybrid crossover operator and by making use of simple bounds to reduce the size of the solution space. The results reveal that the proposed algorithm is able to find high quality solutions and is both faster and more flexible than a recently published Tabu Search approach.
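A minimal Python sketch of the permutation-plus-decoder idea described in this abstract; the shift structure, demand data and can_work feasibility test are illustrative placeholders rather than the paper's actual rostering rules, and the order crossover shown is just one of the operators such an algorithm might use.

    import random

    def decode(permutation, shifts, demand, can_work):
        """Greedy decoder sketch: take nurses in chromosome order and assign each
        to the feasible shift with the most unmet demand."""
        unmet = dict(demand)                  # shift -> remaining cover needed
        roster = {}
        for nurse in permutation:
            feasible = [s for s in shifts if can_work(nurse, s) and unmet[s] > 0]
            if feasible:
                s = max(feasible, key=lambda x: unmet[x])
                roster[nurse] = s
                unmet[s] -= 1
        return roster, sum(unmet.values())    # uncovered demand feeds the GA fitness

    def order_crossover(p1, p2):
        """Standard order crossover (OX) on nurse permutations."""
        a, b = sorted(random.sample(range(len(p1)), 2))
        child = [None] * len(p1)
        child[a:b] = p1[a:b]
        fill = [g for g in p2 if g not in child]
        for i in range(len(child)):
            if child[i] is None:
                child[i] = fill.pop(0)
        return child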
Abstract:
This paper presents a new type of genetic algorithm for the set covering problem. It differs from previous evolutionary approaches first because it is an indirect algorithm, i.e. the actual solutions are found by an external decoder function. The genetic algorithm itself provides this decoder with permutations of the solution variables and other parameters. Second, it will be shown that results can be further improved by adding another indirect optimisation layer. The decoder will not directly seek out low cost solutions but instead aims for good exploitable solutions. These are then post optimised by another hill-climbing algorithm. Although seemingly more complicated, we will show that this three-stage approach has advantages in terms of solution quality, speed and adaptability to new types of problems over more direct approaches. Extensive computational results are presented and compared to the latest evolutionary and other heuristic approaches to the same data instances.
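A small sketch, under assumed data structures (each column as a set of covered rows plus a cost), of the two indirect layers this abstract describes: a greedy decoder that turns a column permutation into a cover, and a simple redundancy-removal pass standing in for the hill-climbing post-optimiser.

    def decode_cover(perm, columns, universe):
        """Scan columns in chromosome order; keep any column that still covers
        something uncovered (decoder layer)."""
        uncovered, chosen = set(universe), []
        for j in perm:
            gain = columns[j]["rows"] & uncovered
            if gain:
                chosen.append(j)
                uncovered -= gain
        return chosen if not uncovered else None    # None = permutation gave no full cover

    def drop_redundant(chosen, columns, universe):
        """Cheap post-optimisation: drop costly columns whose rows remain covered
        by the rest (stand-in for the hill-climbing stage)."""
        for j in sorted(chosen, key=lambda c: -columns[c]["cost"]):
            rest = set().union(*(columns[k]["rows"] for k in chosen if k != j))
            if rest >= set(universe):
                chosen = [k for k in chosen if k != j]
        return chosen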
Abstract:
During our earlier research, it was recognised that in order to be successful with an indirect genetic algorithm approach using a decoder, the decoder has to strike a balance between being an optimiser in its own right and finding feasible solutions. Previously this balance was achieved manually. Here we extend this by presenting an automated approach in which the genetic algorithm itself, while solving the problem, sets the weights that balance these components. As a result, we were able to solve a complex and non-linear scheduling problem better than with a standard direct genetic algorithm implementation.
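A tiny illustrative sketch of the self-adjusting idea: the chromosome carries both the permutation and the decoder weights, so the GA tunes the balance while it solves the problem (the weight semantics here are hypothetical).

    import random

    def random_chromosome(n_items, n_weights):
        """Chromosome = permutation part + weight part; both evolve together."""
        perm = random.sample(range(n_items), n_items)
        weights = [random.random() for _ in range(n_weights)]
        return perm, weights

    def score_option(option_features, weights):
        # The decoder ranks candidate assignments with the evolved weights.
        return sum(w * f for w, f in zip(weights, option_features))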
Abstract:
There is considerable interest in the use of genetic algorithms to solve problems arising in the areas of scheduling and timetabling. However, the classical genetic algorithm paradigm is not well equipped to handle the conflict between objectives and constraints that typically occurs in such problems. In order to overcome this, successful implementations frequently make use of problem specific knowledge. This paper is concerned with the development of a GA for a nurse rostering problem at a major UK hospital. The structure of the constraints is used as the basis for a co-evolutionary strategy using co-operating sub-populations. Problem specific knowledge is also used to define a system of incentives and disincentives, and a complementary mutation operator. Empirical results based on 52 weeks of live data show how these features are able to improve an unsuccessful canonical GA to the point where it is able to provide a practical solution to the problem.
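The co-operating sub-populations mentioned here can be pictured with a co-operative co-evolution skeleton like the one below; the toy real-valued objective is a stand-in for the paper's rostering penalties and constraint-based decomposition, not the authors' actual algorithm.

    import random

    def evaluate(vector):
        # Toy objective standing in for the rostering penalty function
        return sum(x * x for x in vector)

    def coevolve(n_parts, part_size, pop_size=20, generations=50):
        """Co-operative co-evolution sketch: one sub-population per solution
        component; an individual is scored by splicing it into the current
        best representatives of the other sub-populations."""
        pops = [[[random.uniform(-5, 5) for _ in range(part_size)]
                 for _ in range(pop_size)] for _ in range(n_parts)]
        best = [p[0] for p in pops]
        for _ in range(generations):
            for i, pop in enumerate(pops):
                def fitness(ind):
                    return evaluate(sum(best[:i], []) + ind + sum(best[i + 1:], []))
                pop.sort(key=fitness)
                best[i] = pop[0]
                # simple variation: mutate copies of the better half
                for k in range(pop_size // 2, pop_size):
                    parent = random.choice(pop[:pop_size // 2])
                    pop[k] = [g + random.gauss(0, 0.1) for g in parent]
        return sum(best, [])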
Abstract:
Robot-control designers have begun to exploit the properties of the human immune system in order to produce dynamic systems that can adapt to complex, varying, real-world tasks. Jerne’s idiotypic-network theory has proved the most popular artificial-immune-system (AIS) method for incorporation into behaviour-based robotics, since idiotypic selection produces highly adaptive responses. However, previous efforts have mostly focused on evolving the network connections and have often worked with a single, preengineered set of behaviours, limiting variability. This paper describes a method for encoding behaviours as a variable set of attributes, and shows that when the encoding is used with a genetic algorithm (GA), multiple sets of diverse behaviours can develop naturally and rapidly, providing much greater scope for flexible behaviour-selection. The algorithm is tested extensively with a simulated e-puck robot that navigates around a maze by tracking colour. Results show that highly successful behaviour sets can be generated within about 25 minutes, and that much greater diversity can be obtained when multiple autonomous populations are used, rather than a single one.
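A sketch of what a variable-size behaviour encoding might look like; the attribute names and ranges are invented for illustration, and the mutation operator shows how whole behaviours can be perturbed, added or dropped so that the size of the set evolves.

    import random

    ATTRIBUTES = {"speed": (0.0, 1.0), "turn": (-1.0, 1.0), "colour_bias": (0.0, 1.0)}

    def random_behaviour():
        return {a: random.uniform(lo, hi) for a, (lo, hi) in ATTRIBUTES.items()}

    def mutate(behaviour_set, p_add=0.1, p_drop=0.1):
        """Variable-length genome sketch: perturb every attribute slightly, then
        possibly add a fresh behaviour or drop an existing one."""
        new = [{a: v + random.gauss(0, 0.05) for a, v in b.items()} for b in behaviour_set]
        if random.random() < p_add:
            new.append(random_behaviour())
        if len(new) > 1 and random.random() < p_drop:
            new.pop(random.randrange(len(new)))
        return new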
Abstract:
Macroeconomic policy makers are typically concerned with several indicators of economic performance. We thus propose to tackle the design of macroeconomic policy using Multicriteria Decision Making (MCDM) techniques. More specifically, we employ Multiobjective Programming (MP) to seek so-called efficient policies. The MP approach is combined with a computable general equilibrium (CGE) model. We chose a CGE model since such models have the dual advantage of being consistent with standard economic theory while allowing one to measure the effect(s) of a specific policy with real data. Applying the proposed methodology to Spain (via the 1995 Social Accounting Matrix), we first quantified the trade-offs between two specific policy objectives, growth and inflation, when designing fiscal policy. We then constructed a frontier of efficient policies involving real growth and inflation. In doing so, we found that Spanish policy in 1995 displayed some degree of inefficiency with respect to these two policy objectives. We then offer two sets of policy recommendations that, ostensibly, could have helped Spain at the time. The first deals with efficiency independent of the importance given to both growth and inflation by policy makers (we label this set: general policy recommendations). A second set depends on which policy objective is seen as more important by policy makers: increasing growth or controlling inflation (we label this one: objective-specific recommendations).
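A toy weighted-sum sweep illustrating how an efficient growth/inflation frontier can be traced; in the paper each policy is evaluated through the CGE model, whereas here growth and inflation are abstract callables supplied by the caller.

    def efficient_frontier(policies, growth, inflation, steps=21):
        """Sweep the relative weight on growth (maximised) versus inflation
        (minimised) and collect the best policy mix at each weight; the
        distinct points approximate the efficient frontier."""
        frontier = []
        for k in range(steps):
            w = k / (steps - 1)
            best = max(policies, key=lambda p: w * growth(p) - (1 - w) * inflation(p))
            point = (growth(best), inflation(best))
            if point not in frontier:
                frontier.append(point)
        return frontier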
Abstract:
Technologies for Big Data and Data Science are receiving increasing research interest nowadays. This paper introduces the prototyping architecture of a tool aimed at solving Big Data optimization problems. Our tool combines the jMetal framework for multi-objective optimization with Apache Spark, a technology that is gaining momentum. In particular, we make use of the streaming facilities of Spark to feed an optimization problem with data from different sources. We demonstrate the use of our tool by solving a dynamic bi-objective instance of the Traveling Salesman Problem (TSP) based on near real-time traffic data from New York City, which is updated several times per minute. Our experiment shows that jMetal and Spark can be integrated to provide a software platform for dealing with dynamic multi-objective optimization problems.
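The dynamic aspect can be pictured with a generic stand-in like the following (this is not the jMetal/Spark API): one objective matrix is refreshed by whatever streaming consumer receives the traffic updates, and tours are re-evaluated against the latest data.

    import threading

    class DynamicBiObjectiveTSP:
        """Bi-objective TSP whose travel-time matrix can be updated on the fly."""
        def __init__(self, distance, travel_time):
            self.distance, self.travel_time = distance, travel_time
            self.lock = threading.Lock()

        def update_times(self, new_times):      # called by the streaming consumer
            with self.lock:
                self.travel_time = new_times

        def evaluate(self, tour):
            with self.lock:
                legs = list(zip(tour, tour[1:] + tour[:1]))
                return (sum(self.distance[i][j] for i, j in legs),
                        sum(self.travel_time[i][j] for i, j in legs))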
Abstract:
In a previous paper a novel Generalized Multiobjective Multitree model (GMM-model) was proposed. This model considers for the first time multitree-multicast load balancing with splitting in a multiobjective context, whose mathematical solution is a whole Pareto optimal set that can include more solutions than it has been possible to find in the publications surveyed. To solve the GMM-model, in this paper a multi-objective evolutionary algorithm (MOEA) inspired by the Strength Pareto Evolutionary Algorithm (SPEA) is proposed. Experimental results considering up to 11 different objectives are presented for the well-known NSF network, with two simultaneous data flows.
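The SPEA-inspired fitness assignment mentioned here can be sketched as a simplified strength/dominance calculation (not the authors' exact algorithm), with every objective treated as a minimisation.

    def dominates(a, b):
        # a dominates b: no worse in every objective and strictly better in at least one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def spea_style_fitness(objs):
        """Strength = how many solutions each one dominates; fitness = summed
        strength of its dominators, so non-dominated solutions score 0 (lower is better)."""
        strength = [sum(dominates(a, b) for b in objs) for a in objs]
        return [sum(strength[j] for j, b in enumerate(objs) if dominates(b, a))
                for a in objs]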
Abstract:
This research focuses on generating aesthetically pleasing images in virtual environments using the particle swarm optimization (PSO) algorithm. PSO is a stochastic population-based search algorithm inspired by the flocking behavior of birds. In this research, we implement swarms of cameras flying through a virtual world in search of an image that is aesthetically pleasing. Virtual world exploration using particle swarm optimization is considered a new research area and is of interest to both the scientific and artistic communities. Aesthetic rules such as the rule of thirds, subject matter, colour similarity and horizon line are combined into a multi-objective problem that is analyzed and solved over rendered images. A new multi-objective PSO algorithm, the sum-of-ranks PSO, is introduced. It is empirically compared to other single-objective and multi-objective swarm algorithms. An advantage of the sum-of-ranks PSO is that it is useful for solving high-dimensional problems within the context of this research. Throughout many experiments, we show that our approach is capable of automatically producing images satisfying a variety of supplied aesthetic criteria.
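The sum-of-ranks idea is easy to sketch: rank the particles on each aesthetic objective separately and add the ranks, so the swarm is guided by one scalar even in high dimension (ties are broken arbitrarily in this sketch).

    def sum_of_ranks(objective_values):
        """objective_values[i][k] = value of objective k for particle i (minimisation)."""
        n, m = len(objective_values), len(objective_values[0])
        totals = [0] * n
        for k in range(m):
            order = sorted(range(n), key=lambda i: objective_values[i][k])
            for rank, i in enumerate(order, start=1):
                totals[i] += rank
        return totals          # lower total rank = better particle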
Abstract:
Clustering is a difficult task: there is no single cluster definition and the data can have more than one underlying structure. Pareto-based multi-objective genetic algorithms (e.g., MOCK, Multi-Objective Clustering with automatic K-determination, and MOCLE, Multi-Objective Clustering Ensemble) were proposed to tackle these problems. However, the output of such algorithms can often contain a high number of partitions, making it difficult for an expert to manually analyze all of them. In order to deal with this problem, we present two selection strategies, based on the corrected Rand index, to choose a subset of solutions. To test them, they are applied to the set of solutions produced by MOCK and MOCLE in the context of several datasets. The study was also extended to select a reduced set of partitions from the initial population of MOCLE. These analyses show that both versions of the proposed selection strategy are very effective. They can significantly reduce the number of solutions and, at the same time, keep the quality and the diversity of the partitions in the original set of solutions.
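One plausible reading of a corrected-Rand-based selection, sketched with scikit-learn's adjusted_rand_score; the greedy diversity criterion below is illustrative and may differ from the two strategies actually proposed.

    from sklearn.metrics import adjusted_rand_score

    def select_diverse(partitions, k):
        """Greedily keep partitions whose maximum adjusted Rand similarity to the
        already-selected ones is smallest, so the reduced set stays diverse."""
        selected = [0]
        while len(selected) < k:
            remaining = [i for i in range(len(partitions)) if i not in selected]
            best = min(remaining, key=lambda i: max(
                adjusted_rand_score(partitions[i], partitions[j]) for j in selected))
            selected.append(best)
        return [partitions[i] for i in selected]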
Abstract:
Traditionally, ancillary services are supplied by large conventional generators. However, with the huge penetration of distributed generators (DGs) as a result of the growing interest in satisfying energy requirements, and considering the benefits that they can bring to the electrical system and to the environment, it appears reasonable to assume that ancillary services could also be provided by DGs in an economical and efficient way. In this paper, a settlement procedure for a reactive power market for DGs in distribution systems is proposed. Attention is directed to wind turbines connected to the network through permanent-magnet synchronous generators and doubly-fed induction generators. The generation uncertainty of this kind of DG is reduced by running a multi-objective optimization algorithm in multiple probabilistic scenarios through the Monte Carlo method and by representing the active power generated by the DGs through Markov models. The objectives to be minimized are the payments of the distribution system operator to the DGs for reactive power, the curtailment of transactions committed in a previously settled active power market, the losses in the lines of the network, and a voltage profile index. The proposed methodology was tested using a modified IEEE 37-bus distribution test system.
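The scenario-generation step can be pictured as sampling DG active-power trajectories from a Markov chain inside a Monte Carlo loop; the state discretisation and transition matrix below are placeholders for the models estimated in the paper.

    import random

    def sample_scenarios(power_states, transition, start_state, horizon, n_scenarios):
        """transition[i][j] = probability of moving from state i to state j;
        each sampled trajectory is one probabilistic scenario for the
        multi-objective dispatch problem."""
        scenarios = []
        for _ in range(n_scenarios):
            s, path = start_state, []
            for _ in range(horizon):
                s = random.choices(range(len(power_states)), weights=transition[s])[0]
                path.append(power_states[s])
            scenarios.append(path)
        return scenarios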
Abstract:
There are strong uncertainties regarding LAI dynamics in forest ecosystems in response to climate change. While empirical growth & yield models (G&YMs) provide good estimations of tree growth at the stand level on a yearly to decennial scale, process-based models (PBMs) use LAI dynamics as a key variable for enabling the accurate prediction of tree growth over short time scales. Bridging the gap between PBMs and G&YMs could improve the prediction of forest growth and, therefore, carbon, water and nutrient fluxes by combining modeling approaches at the stand level. Our study aimed to estimate monthly changes of leaf area in response to climate variations from sparse measurements of foliage area and biomass. A leaf population probabilistic model (SLCD) was designed to simulate foliage renewal. The leaf population was distributed in monthly cohorts, and the total population size was limited depending on forest age and productivity. Foliage dynamics were driven by a foliation function and the probabilities ruling leaf aging or fall, whose formulation depends on the forest environment. The model was applied to three tree species growing under contrasting climates and soil types. In tropical Brazilian evergreen broadleaf eucalypt plantations, the phenology was described using 8 parameters. A multi-objective evolutionary algorithm (MOEA) was used to fit the model parameters on litterfall and LAI data over an entire stand rotation. Field measurements from a second eucalypt stand were used to validate the model. Seasonal LAI changes were accurately rendered for both sites (R² = 0.898 for the adjustment, R² = 0.698 for the validation). Litterfall production was correctly simulated (R² = 0.562 adjustment, R² = 0.4018 validation) and may be improved by using additional validation data in future work. In two French temperate deciduous forests (beech and oak), we adapted phenological sub-modules of the CASTANEA model to simulate canopy dynamics, and SLCD was validated using LAI measurements. The phenological patterns were simulated with good accuracy in the two cases studied. However, maximum LAI was not accurately simulated in the beech forest, and further improvement is required. Our probabilistic approach is expected to contribute to improving predictions of LAI dynamics. The model formalism is general and suitable for broadleaf forests over a large range of ecological conditions.
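A bare-bones sketch of the monthly cohort bookkeeping such a leaf-population model implies; the survival function and the population cap are the quantities a MOEA would calibrate, and their forms here are purely illustrative.

    def step_month(cohorts, new_leaf_area, survival, max_total):
        """cohorts = list of (age_in_months, leaf_area). Age existing cohorts with an
        age-dependent survival fraction, add the new foliation cohort, and cap the
        total leaf area (stand age / productivity limit) by proportional thinning."""
        aged = [(age + 1, area * survival(age)) for age, area in cohorts]
        aged.append((0, new_leaf_area))
        total = sum(area for _, area in aged)
        if total > max_total:
            aged = [(age, area * max_total / total) for age, area in aged]
        return aged

    def leaf_area_index(cohorts):
        return sum(area for _, area in cohorts)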
Abstract:
The aim of this Doctoral Thesis is to develop genetic algorithm-based optimization methods to find the best conceptual design architecture of an aero piston engine for given design specifications. Nowadays, the conceptual design of turbine airplanes starts with the aircraft specifications, and then the turbofan or turboprop best suited to the specific application is chosen. In the aeronautical piston engine field, which has been dormant for several decades as interest shifted towards turbine aircraft, new materials with increased performance and properties have opened new possibilities for development. Moreover, the engine's modularity, given by the cylinder unit, makes it possible to design a specific engine for a given application. In many real engineering problems the number of design variables may be very high, and several non-linearities are needed to describe the behaviour of the phenomena. In this case the objective function has many local extrema, but the designer is usually interested in the global one. Stochastic and evolutionary optimization techniques, such as genetic algorithms, may offer reliable solutions to such design problems within acceptable computational time. The optimization algorithm developed here can be employed in the first phase of the preliminary design of an aeronautical piston engine. It is a mono-objective genetic algorithm which, starting from the given design specifications, finds the engine propulsive system configuration that has minimum mass while satisfying the geometrical, structural and performance constraints. The algorithm reads the project specifications as input data, namely the maximum crankshaft and propeller shaft speeds and the maximum pressure in the combustion chamber. The design variable bounds, which describe the solution domain from the geometrical point of view, are introduced as well. In the Matlab® Optimization environment the objective function to be minimized is defined as the sum of the masses of the engine propulsive components. Each individual generated by the genetic algorithm is the assembly of the flywheel, the vibration damper, and as many pistons, connecting rods and cranks as there are cylinders. The fitness is evaluated for each individual of the population, and then the genetic operators are applied: reproduction, mutation, selection and crossover. In the reproduction step the elitist method is applied in order to protect the fittest individuals from disruption by mutation and recombination, letting them survive unchanged into the next generation. Finally, once the best individual is found, the optimal dimensions of the components are saved to an Excel® file in order to build an automatic 3D CAD model of each component of the propulsive system, giving a direct pre-visualization of the final product while still in the engine's preliminary design phase. With the purpose of showing the performance of the algorithm and validating this optimization method, an actual engine is taken as a case study: the 1900 JTD Fiat Avio, a 4-cylinder, four-stroke Diesel engine. Many verifications are made on the mechanical components of the engine in order to test their feasibility and to decide their survival through the generations. A system of inequalities is used to describe the non-linear relations between the design variables and to check the components under static and dynamic load configurations.
The design variable geometrical boundaries are taken from actual engine data and similar design cases. Among the many simulations run for algorithm testing, twelve have been chosen as representative of the distribution of the individuals. Then, as an example, for each simulation the corresponding 3D models of the crankshaft and the connecting rod have been built automatically. In spite of morphological differences among the components, the mass is almost the same. The results show a significant mass reduction (almost 20% for the crankshaft) in comparison to the original configuration, and an acceptable robustness of the method has been shown. The algorithm developed here is shown to be a valid method for the preliminary design optimization of an aeronautical piston engine. In particular, the procedure is able to analyze quite a wide range of design solutions, rejecting those that cannot fulfil the feasibility design specifications. This optimization algorithm could accelerate aeronautical piston engine development, speeding up the design process and combining modern computational performance and technological awareness with long-standing traditional design experience.
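The fitness described above (minimum mass subject to geometrical, structural and performance checks) is typically handled with a penalty formulation; the sketch below assumes the constraints are expressed as functions g(design) <= 0, which is an illustrative convention rather than the thesis' exact formulation.

    def penalised_mass(design, component_masses, constraints, penalty_weight=1e6):
        """component_masses(design) -> list of component masses;
        constraints = iterable of g where feasibility means g(design) <= 0.
        Violated constraints add a large penalty so infeasible designs die out."""
        mass = sum(component_masses(design))
        violation = sum(max(0.0, g(design)) for g in constraints)
        return mass + penalty_weight * violation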
Abstract:
Wireless sensor networks (WSNs) have shown their potential in various applications, which bring many benefits to users in both research and industrial areas. For many setups, it is envisioned that WSNs will consist of tens to hundreds of nodes that operate on small batteries. However, due to the diversity of deployment environments and the resource constraints on radio communication, sensing ability and energy supply, it is very challenging to plan an optimized WSN topology and to predict its performance before real deployment. During the network planning phase, connectivity, coverage, cost, network longevity and service quality should all be considered. It therefore requires designers with comprehensive and interdisciplinary knowledge, including networking, radio engineering and embedded systems, to efficiently construct a reliable WSN for a specific type of environment. Nowadays there is still a lack of analysis and experience to guide WSN designers in constructing WSN topologies efficiently without many trials. Simulation is therefore a feasible approach to the quantitative analysis of the performance of wireless sensor networks. However, the existing planning algorithms and tools have, to some extent, serious limitations for practically designing reliable WSN topologies. Only a few of them tackle the 3D deployment issue, while an overwhelming number of works place devices in a 2D scheme. Without considering the full dimension, the impact of the environment on WSN performance is not completely studied, so the values of evaluated metrics such as connectivity and sensing coverage are not sufficiently accurate to support proper decisions. Even fewer planning methods model sensing coverage and radio propagation in realistic scenarios where obstacles exist. Radio signals propagate with multi-path phenomena in the real world, in which direct, reflected and diffracted paths all contribute to the received signal strength. Moreover, obstacles along the path between a sensor and its targets might block the sensing signals and thus create coverage holes in the application. None of the existing planning algorithms model network longevity and packet delivery capability properly and practically; they often employ unilateral and unrealistic formulations, and their optimization targets are often one-sided. Without comprehensive evaluation of the important metrics, the performance of planned WSNs cannot be reliable and fully optimized. Modeling the environment is usually time-consuming and very costly, yet none of the current works provide a method to model the 3D deployment environment efficiently and accurately. Many researchers are therefore trapped by this issue, and their algorithms can only be evaluated in the same scenario, without the possibility to test their robustness and feasibility for implementation in different environments. In this thesis, we propose a novel planning methodology and an intelligent WSN planning tool to assist WSN designers in efficiently planning reliable WSNs. First, a new method is proposed to efficiently and automatically model 3D indoor and outdoor environments. To the best of our knowledge, this is the first time that image-understanding algorithms are applied to automatically reconstruct 3D outdoor and indoor scenarios for signal propagation and network planning purposes.
The experimental results indicate that the proposed methodology is able to accurately recognize different objects from satellite images of the outdoor target regions and from the scanned floor plan of the indoor area. Its mechanism offers users the flexibility to reconstruct different types of environment without any human interaction. It thereby significantly reduces the human effort, cost and time spent on reconstructing a 3D geographic database and allows WSN designers to concentrate on the planning issues. Second, an efficient ray-tracing engine is developed to accurately and practically model the radio propagation and sensing signal on the constructed 3D map. The engine contributes efficiency and accuracy to the estimated results. By using image processing concepts, including the kd-tree space division algorithm and a modified polar sweep algorithm, the rays are traced efficiently without testing all the primitives in the scene. The proposed radio propagation model emphasizes not only the materials of obstacles but also their locations along the signal path. The sensing signal of the sensor nodes, which is sensitive to obstacles, benefits from the ray-tracing algorithm via obstacle detection. The performance of this modelling method is robust and accurate compared with conventional methods, and experimental results imply that the methodology is suitable for both outdoor urban scenes and indoor environments. Moreover, it can be applied to either GSM communication or the ZigBee protocol by varying the frequency parameter of the radio propagation model. Third, a WSN planning method is proposed to tackle the above-mentioned challenges and efficiently deploy reliable WSNs. More metrics (connectivity, coverage, cost, lifetime, packet latency and packet drop rate) are modeled more practically than in other works. In particular, the 3D ray-tracing method is used to model the radio links and sensing signals, which are sensitive to obstruction by obstacles; network routing is constructed using the AODV protocol; and the network longevity, packet delay and packet drop rate are obtained by simulating practical events in the WSNet simulator, which, to the best of our knowledge, is the first time a network simulator has been involved in a planning algorithm. Moreover, a multi-objective optimization algorithm is developed to cater for the characteristics of WSNs. The capability of providing multiple optimized solutions simultaneously allows users to make their own decisions accordingly, and the results are more comprehensively optimized than those of other state-of-the-art algorithms. iMOST is developed by integrating the introduced algorithms to assist WSN designers in efficiently planning reliable WSNs for different configurations. The abbreviated name iMOST stands for Intelligent Multi-objective Optimization Sensor network planning Tool. iMOST contributes: (1) convenient operation with a user-friendly vision system; (2) efficient and automatic 3D database reconstruction and fast 3D object design for both indoor and outdoor environments; (3) multiple multi-objective optimized 3D deployment solutions, with user-configurable network properties, so that the tool can adapt to various WSN applications; (4) deployment solutions in 3D space and the corresponding evaluated performance presented visually to users; and (5) the Node Placement Module of iMOST available online, together with the source code of the other two rebuilt heuristics.
WSN designers will therefore benefit from this tool in efficiently constructing the environment database and in practically and efficiently planning reliable WSNs for both outdoor and indoor applications. With the open source code, they are also able to compare their own algorithms with ours and contribute to this academic field. Finally, solid real-world results are obtained for both indoor and outdoor WSN planning. Deployments have been realized for both indoor and outdoor environments based on the provided planning solutions, and the measured results coincide well with the estimated results. The proposed planning algorithm is adaptable to the WSN designer's requirements and configuration, and it offers the flexibility to plan small- and large-scale, indoor and outdoor 3D deployments. The thesis is organized in 7 chapters. In Chapter 1, WSN applications and the motivations of this work are introduced, the state-of-the-art planning algorithms and tools are reviewed, the challenges are stated and the proposed methodology is briefly introduced. In Chapter 2, the proposed 3D environment reconstruction methodology is introduced and its performance is evaluated for both outdoor and indoor environments. The developed ray-tracing engine and the proposed radio propagation modelling method are described in detail in Chapter 3, and their performance is evaluated in terms of computational efficiency and accuracy. Chapter 4 presents the modelling of the important WSN metrics and the proposed multi-objective optimization planning algorithm, whose performance is compared with the other state-of-the-art planning algorithms. The intelligent WSN planning tool iMOST is described in Chapter 5. Real WSN deployments are carried out based on the planned solutions for both indoor and outdoor scenarios, and the important data are measured and the results analysed in Chapter 6. Chapter 7 concludes the thesis and discusses future work.
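The planning step that returns several optimized deployments at once boils down to keeping the non-dominated candidates; a generic Pareto filter is sketched below, with every metric cast as a minimisation (e.g. cost, 1 - coverage, 1 - connectivity, latency, drop rate) purely for illustration.

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(deployments, metrics):
        """Score each candidate node placement on all metrics and return only the
        non-dominated deployments, which are then presented to the designer."""
        scored = [(d, [m(d) for m in metrics]) for d in deployments]
        return [d for d, v in scored
                if not any(dominates(w, v) for _, w in scored if w is not v)]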
A simplified spectral approach for impedance-based damage identification of FRP-strengthened RC beams
Abstract:
Nowadays, the external bonding of fibre reinforced polymer (FRP) plates or sheets is increasingly used for the strengthening and retrofitting of reinforced concrete (RC) structures due to its numerous advantages. However, this kind of strengthening often leads to brittle failure modes, the most dominant being the debonding induced by an intermediate crack (IC). In spite of its importance, the number of studies regarding the IC debonding mechanism and bond health monitoring is very limited. Methodologies able to monitor the long-term efficiency of bonding and successfully identify the initiation of FRP debonding constitute a challenge to be met. The main purpose of this thesis is the implementation of a reliable and effective damage identification methodology able to detect intermediate crack debonding in FRP-strengthened RC beams. To achieve this goal, a model updating procedure based on numerical simulations and experimental tests has been implemented. For this, firstly, a simple and inexpensive one-dimensional model based on the discrete crack approach for concrete and the spectral element method has been developed. The progressive formation of flexural cracks and the subsequent concrete-FRP interfacial debonding are formulated by introducing a new element able to represent both phenomena simultaneously without perturbing the numerical procedure. Furthermore, with the proposed model, the high-frequency dynamic response of these kinds of structures can also be obtained in a very simple and inexpensive way, which makes this procedure very useful as a tool for diagnosis and detection of debonding in its initial stage by monitoring the change in local dynamic characteristics. One very promising active non-destructive evaluation method for local monitoring is impedance-based structural health monitoring (SHM) using piezoelectric ceramic (PZT) sensor-actuators. The electrical impedance of the PZT can be directly related to the mechanical impedance of the host structural component where the PZT transducers are attached.
Since the structural mechanical impedance will be affected by the presence of structural damage, comparisons of admittance (inverse of impedance) spectra at various times during the service period of the structure can be used as a damage indicator. Any change in the spectra might be an indication of a change in the structural integrity. The electrical impedance is measured at high frequencies, which makes this methodology very sensitive to incipient local damage, as desired for our application. A bonded-PZT-FRP spectral beam element approach, based on an extension of the previous discrete crack approach, is implemented to calculate the electrical impedance of a PZT transducer bonded to the FRP plate of a RC beam. This approach, in conjunction with experimental measurements from PZT actuator-sensors mounted on the structure, is used to present an updating methodology that quantitatively detects interfacial debonding between a FRP strip and the host RC structure. The updating procedure is solved by using an ensemble particle swarm optimization approach with a bagging algorithm, and the results demonstrate a substantial improvement in the performance and accuracy of damage detection for the proposed problem. Additionally, an adaptive spectral element mesh strategy has also been developed to locate damage from experimental results, which shows the robustness and effectiveness of the proposed method in identifying incipient damage at an early stage. Lastly, multi-objective optimization has been carried out to detect debonding damage in a real-scale FRP-strengthened RC beam by using impedance signatures. A network of PZT sensors is distributed along the beam to construct multiple impedance-based objectives under a gradually induced damage scenario. By combining the spectral element model presented previously with an ensemble multi-objective PSO algorithm, the implemented damage detection process yields satisfactory predictions considering the scale of the structure and all the uncertainties involved. The obtained results prove the feasibility and capability of the aforementioned methods and also their potential in real engineering applications.
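A common impedance-based damage metric of the kind this abstract relies on is the RMSD between a baseline and a current admittance spectrum; the sketch below is a generic indicator, not the thesis' exact damage functions, and in a multi-objective setting one such value per PZT sensor would become one objective.

    import math

    def rmsd_damage_index(baseline_admittance, current_admittance):
        """Root-mean-square deviation between baseline and current admittance
        spectra of one PZT sensor; larger values suggest a local change in
        mechanical impedance, e.g. incipient FRP debonding."""
        num = sum((c - b) ** 2 for b, c in zip(baseline_admittance, current_admittance))
        den = sum(b ** 2 for b in baseline_admittance)
        return math.sqrt(num / den)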