48 results for many-objective problems
at Universidad Politécnica de Madrid
Abstract:
This paper proposes a new multi-objective estimation of distribution algorithm (EDA) based on joint modeling of objectives and variables. This EDA uses a multi-dimensional Bayesian network as its probabilistic model. In this way it can capture the dependencies between objectives and between variables and objectives, in addition to the dependencies between variables that other Bayesian network-based EDAs learn. This model leads to a problem decomposition that helps the proposed algorithm find better trade-off solutions to the multi-objective problem. In addition to approximating the Pareto set, the algorithm is also able to estimate the structure of the multi-objective problem. To handle many-objective problems, the algorithm includes four different ranking methods proposed in the literature for this purpose. The algorithm is applied to the set of walking fish group (WFG) problems, and its optimization performance is compared with an evolutionary algorithm and another multi-objective EDA. The experimental results show that the proposed algorithm performs significantly better than the compared algorithms on many of the problems and across different objective-space dimensions, and achieves comparable results on the rest.
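For illustration, the joint variable-objective modeling idea can be sketched in a few lines of Python, using a single multivariate Gaussian as a stand-in for the multi-dimensional Bayesian network and a simple scalarizing selection instead of the ranking methods mentioned above; every name and parameter here is illustrative, not the paper's implementation.

import numpy as np

def joint_eda(objectives, n_var, pop_size=100, sel_frac=0.5, n_gen=50, seed=0):
    """Toy EDA that models variables and objective values jointly with one
    multivariate Gaussian (a simplified stand-in for the multi-dimensional
    Bayesian network described in the abstract)."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.0, 1.0, size=(pop_size, n_var))
    for _ in range(n_gen):
        objs = np.array([objectives(x) for x in pop])          # shape (pop, n_obj)
        joint = np.hstack([pop, objs])                          # variables + objectives
        # Select the better half by a simple scalarization (sum of objectives);
        # the real algorithm uses Pareto-based ranking instead.
        order = np.argsort(objs.sum(axis=1))
        sel = joint[order[: int(sel_frac * pop_size)]]
        mean = sel.mean(axis=0)
        cov = np.cov(sel, rowvar=False) + 1e-6 * np.eye(sel.shape[1])
        # Sample new joint vectors and keep only the variable part.
        new = rng.multivariate_normal(mean, cov, size=pop_size)[:, :n_var]
        pop = np.clip(new, 0.0, 1.0)
    return pop

# Example: two simple conflicting objectives on [0, 1]^5.
pareto_candidates = joint_eda(lambda x: np.array([x[0], 1.0 - x[0] + x[1:].sum()]), n_var=5)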
Abstract:
Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), determining their behavior and performance in optimization. Regularization is a well-known statistical technique used for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is one such technique, with the appealing variable-selection property that results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented, and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA scales logarithmically with the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs more robust optimization, and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets with small to medium dimensionality, when using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison to standard methods.
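A minimal sketch of one iteration of the regularized model-estimation idea, assuming scikit-learn's GraphicalLasso as the ℓ1-regularized Gaussian estimator; this is an illustrative simplification, not the thesis' exact procedure.

import numpy as np
from sklearn.covariance import GraphicalLasso

def regularized_eda_step(selected, pop_size, alpha=0.5, seed=0):
    """Fit an l1-regularized (sparse) Gaussian to the selected solutions and
    sample a new population from it.  'selected' is an (n, d) array where n
    may be much smaller than d (the high-dimensional setting)."""
    rng = np.random.default_rng(seed)
    model = GraphicalLasso(alpha=alpha, max_iter=200)  # alpha controls precision-matrix sparsity
    model.fit(selected)
    mean = selected.mean(axis=0)
    return rng.multivariate_normal(mean, model.covariance_, size=pop_size)

# Example: 30 selected solutions of a 100-variable problem
# (a population size that is small relative to the dimension).
selected = np.random.default_rng(1).normal(size=(30, 100))
new_pop = regularized_eda_step(selected, pop_size=60)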
Abstract:
In this study, we present a framework based on ant colony optimization (ACO) for tackling combinatorial problems. ACO algorithms have been applied to many different problems, with research focusing on algorithmic variants that obtain high-quality solutions. Usually, the implementations are redone for each new problem even when the details of the ACO algorithm stay the same. Our goal, however, is to build a sustainable framework for applications on permutation problems. We concentrate on understanding the behavior of pheromone trails and the specific methods that can be combined. Eventually, we will propose an automatic offline configuration tool to build an effective algorithm. ---RESUMEN--- En este trabajo vamos a presentar un framework basado en la familia de algoritmos ant colony optimization (ACO), los cuales están diseñados para enfrentarse a problemas combinatorios. Los algoritmos ACO han sido aplicados a diversos problemas, centrándose los investigadores en diversas variantes que obtienen buenas soluciones. Normalmente, las implementaciones se tienen que rehacer, incluso si se mantienen los mismos detalles para los algoritmos ACO. Sin embargo, nuestro objetivo es generar un framework sostenible para aplicaciones sobre problemas de permutaciones. Nos centraremos en comprender el comportamiento de las sendas de feromonas y ciertos métodos con los que pueden ser combinados. Finalmente, propondremos una herramienta para la configuración automática offline para construir algoritmos eficientes.
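As an illustration of the kind of pheromone-based construction such a framework builds on, the following is a generic Ant System-style loop for permutation problems; the names and the simple iteration-best update rule are illustrative and do not reflect the framework's actual design.

import numpy as np

def aco_permutation(cost, n, n_ants=20, n_iter=100, alpha=1.0, rho=0.1, seed=0):
    """Generic Ant System-style loop for permutation problems.
    cost(perm) returns the objective value of a permutation of range(n)."""
    rng = np.random.default_rng(seed)
    tau = np.ones((n, n))               # pheromone on (position, item) pairs
    best_perm, best_cost = None, np.inf
    for _ in range(n_iter):
        ants = []
        for _ in range(n_ants):
            remaining, perm = list(range(n)), []
            for pos in range(n):
                weights = tau[pos, remaining] ** alpha
                item = rng.choice(remaining, p=weights / weights.sum())
                perm.append(int(item))
                remaining.remove(item)
            ants.append((cost(perm), perm))
        tau *= (1.0 - rho)                           # evaporation
        it_cost, it_perm = min(ants, key=lambda t: t[0])
        for pos, item in enumerate(it_perm):         # reinforce iteration-best solution
            tau[pos, item] += 1.0 / (1.0 + it_cost)
        if it_cost < best_cost:
            best_cost, best_perm = it_cost, it_perm
    return best_perm, best_cost

# Example: order items so that larger values come first.
vals = np.arange(8)
best, c = aco_permutation(lambda p: sum(i * vals[p[i]] for i in range(8)), n=8)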
Abstract:
As one of the most competitive approaches to multi-objective optimization, evolutionary algorithms have been shown to obtain very good results for many real-world multi-objective problems. One of the issues that can affect the performance of these algorithms is uncertainty in the quality of the solutions, usually represented as noise in the objective values. Handling noisy objectives in evolutionary multi-objective optimization algorithms has therefore become very important and has gained more attention in recent years. In this paper we present the α-degree Pareto dominance relation for ordering solutions in multi-objective optimization when the values of the objective functions are given as intervals. Based on this dominance relation, we propose an adaptation of the non-dominated sorting algorithm for ranking solutions. This ranking method is then used in a standard multi-objective evolutionary algorithm and in a recently proposed multi-objective estimation of distribution algorithm based on joint variable-objective probabilistic modeling, and applied to a set of multi-objective problems with different levels of independent noise. The experimental results show that the proposed solution-ranking method makes it possible to approximate Pareto sets that are considerably better than those obtained with the dominance probability-based ranking method, one of the main methods for noise handling in multi-objective optimization.
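For illustration only, interval-valued objectives can be compared with a possibility-degree rule of the following kind (minimization assumed); this sketch is not the α-degree Pareto dominance definition introduced in the paper, merely a schematic example of degree-based interval dominance.

import numpy as np

def possibility_leq(a, b):
    """Degree to which interval a = (a_lo, a_hi) is less than or equal to
    interval b = (b_lo, b_hi), using a standard possibility-degree formula."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    width = (a_hi - a_lo) + (b_hi - b_lo)
    if width == 0.0:                       # both intervals are crisp values
        return 1.0 if a_lo <= b_lo else 0.0
    return float(np.clip((b_hi - a_lo) / width, 0.0, 1.0))

def degree_dominates(x, y, alpha=0.7):
    """x and y are lists of objective intervals (minimization).  Here x is taken
    to dominate y at level alpha if it is 'better or equal' with degree >= alpha
    on every objective and strictly better (degree > 0.5) on at least one."""
    degs = [possibility_leq(xi, yi) for xi, yi in zip(x, y)]
    return all(d >= alpha for d in degs) and any(d > 0.5 for d in degs)

# Example: two solutions with noisy (interval) objective values.
x = [(1.0, 1.4), (2.0, 2.5)]
y = [(1.6, 2.0), (2.6, 3.0)]
print(degree_dominates(x, y))   # True under this illustrative rule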
Abstract:
Techniques under the umbrella of Soft Computing are known to have a strong capability for learning and cognition, as well as a good tolerance to uncertainty and imprecision. Due to these properties they can be applied successfully to Intelligent Vehicle Systems; ITS (Intelligent Transportation Systems) encompass a broad range of technologies and techniques that hold answers to many transportation problems. The unmanned control of a vehicle's steering wheel is one of the most important challenges facing researchers in this area. This paper presents a method to automatically adjust a fuzzy controller that manages the steering wheel of a mass-produced vehicle; to achieve this, information about the car's state while a human driver is handling it is recorded and used to tune an appropriate fuzzy controller via iterative genetic algorithms. The obtained controllers are evaluated on their performance in the track-following task, as well as on the smoothness of the resulting driving.
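A toy example of the kind of fitness function a genetic algorithm could use when tuning such a controller against recorded driving data, rewarding track following and penalizing abrupt steering; the variable names and weighting are purely illustrative.

import numpy as np

def controller_fitness(lateral_error, steering_angle, w_smooth=0.3):
    """Score one simulated (or recorded) run of a candidate fuzzy controller.
    lateral_error: distance to the reference trajectory at each time step.
    steering_angle: commanded steering at each time step.
    Lower tracking error and smoother steering give a higher fitness."""
    tracking = np.mean(np.abs(lateral_error))               # track-following quality
    smoothness = np.mean(np.abs(np.diff(steering_angle)))   # abrupt steering changes
    return -(tracking + w_smooth * smoothness)               # the GA maximizes this value

# Example with synthetic signals.
t = np.linspace(0.0, 10.0, 200)
print(controller_fitness(0.2 * np.sin(t), 0.1 * np.cos(3 * t)))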
Abstract:
The boundary element method (BEM) has been applied successfully to many engineering problems during the last decades. Compared with domain-type methods like the finite element method (FEM) or the finite difference method (FDM), the BEM can handle problems where the medium extends to infinity much more easily, as there is no need to develop special boundary conditions (quiet or absorbing boundaries) or infinite elements at the boundaries introduced to limit the domain studied. The determination of the dynamic stiffness of arbitrarily shaped footings is just one of the fields where the BEM has been the method of choice, especially in the 1980s. With the continuous development of computer technology and the available hardware, the size of the problems under study grew and, as the flop count for solving the resulting linear system of equations grows with the third power of the number of equations, there was a need for iterative methods with better performance. In [1] the GMRES algorithm was presented, which is now widely used in implementations of the collocation BEM. While the FEM results in sparsely populated coefficient matrices, the BEM leads, in general, to fully or densely populated ones, depending on the number of subregions, posing a serious memory problem even for today's computers. If the geometry of the problem permits the surface of the domain to be meshed with equally shaped elements, many of the resulting coefficients will be calculated and stored repeatedly. The present paper shows how these unnecessary operations can be avoided, reducing the calculation time as well as the storage requirement. To this end, a similar coefficient identification algorithm (SCIA) has been developed and implemented in a program written in Fortran 90. The vertical dynamic stiffness of a single pile in layered soil has been chosen to test the performance of the implementation. The results obtained with the 3-D model may be compared with those obtained with an axisymmetric formulation, which are considered the reference values as the mesh quality is much better. The entire 3-D model comprises more than 35,000 DOFs, the biggest single region being a soil region with 21,168 DOFs. Note that the memory necessary to store all coefficients of this single region is about 6.8 GB, an amount which is usually not available on personal computers. In the problem under study, the interface zone between the two adjacent soil regions as well as the surface of the top layer may be meshed with equally sized elements. In this case the application of the SCIA leads to an important reduction in memory requirements: the maximum memory used during the calculation has been reduced to 1.2 GB. The application of the SCIA thus permits problems to be solved on personal computers which would otherwise require much more powerful hardware.
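The core idea of reusing coefficients for equally shaped elements can be sketched as memoization keyed on the relative element geometry; the actual SCIA and its Fortran 90 implementation are considerably more elaborate, and all names below are illustrative.

import numpy as np

_cache = {}

def relative_signature(src, fld, tol=1e-9):
    """Key describing a source/field element pair up to rigid translation:
    identical keys mean the BEM influence coefficients coincide as well
    (for equally shaped and equally oriented elements)."""
    rel = np.round((fld - src[0]) / tol) * tol     # field geometry relative to the source
    shape = np.round((src - src[0]) / tol) * tol   # source element shape
    return shape.tobytes() + rel.tobytes()

def influence_coefficient(src, fld, integrate):
    """Return the coefficient for a source/field element pair, reusing a
    previously computed value whenever the relative geometry is the same."""
    key = relative_signature(src, fld)
    if key not in _cache:
        _cache[key] = integrate(src, fld)          # expensive numerical integration
    return _cache[key]

# Example: a dummy 'integration' shows that translated copies hit the cache.
dummy = lambda s, f: float(np.linalg.norm(s.mean(0) - f.mean(0)))
e1 = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
print(influence_coefficient(e1, e1 + [5.0, 0.0], dummy))
print(influence_coefficient(e1 + [2.0, 2.0], e1 + [7.0, 2.0], dummy), len(_cache))  # reused -> 1 entry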
Abstract:
There is general agreement within the scientific community that Biology is the science with the most potential for development in the 21st century. This is due to several reasons, but probably the most important one is the state of development of the rest of the experimental and technological sciences. In this context, there is a very rich variety of mathematical tools, physical techniques and computer resources that permit biological experiments that were unbelievable only a few years ago. Biology is nowadays taking advantage of all these newly developed technologies, which are being applied to the life sciences, opening new research fields and helping to give new insights into many biological problems. Consequently, biologists have greatly improved their knowledge in many key areas such as human function and human disease. However, there is one human organ that is still barely understood compared with the rest: the human brain. Understanding the human brain is one of the main challenges of the 21st century, and it is considered a strategic research field for the European Union and the USA. Thus, there is great interest in applying new experimental techniques to the study of brain function. Magnetoencephalography (MEG) is one of these novel techniques currently applied for mapping brain activity. This technique has important advantages compared to metabolism-based brain imaging techniques like functional magnetic resonance imaging (fMRI). The main advantage is that MEG has a higher time resolution than fMRI. Another benefit of MEG is that it is a patient-friendly clinical technique: the measurement is performed with a wireless setup and the patient is not exposed to any radiation. Although MEG is widely applied in clinical studies, there are still open issues regarding data analysis. The present work deals with the solution of the inverse problem in MEG, which is the most controversial and uncertain part of the analysis process. This question is addressed using several variations of a new solving algorithm based on a heuristic method. The performance of these methods is analyzed by applying them to several test cases with known solutions and comparing those solutions with the ones provided by our methods.
Abstract:
Este proyecto trata de completar un estudio sobre la viabilidad de una instalación de turbina de eje vertical, en una azotea de un edificio de 6 plantas en el centro de la ciudad de Madrid. Está basado en la comunidad de vecinos de la calle de Lagasca 106 de Madrid, y se realiza de forma global, con objeto de que sirva como ejemplo a futuros estudios a realizar en esta área, incluyendo todas las dificultades y problemas que este tipo de proyectos muestran en su viabilidad. Los aspectos que vamos a abordar son: Demostración de que una turbina de eje vertical es más indicada e idónea para estos casos que una turbina de eje horizontal. Capacidad de generación eléctrica de la instalación que proponemos. Problemas asociados con la actual legislación. Problemas relacionados con la instalación eléctrica: el inversor de corriente, la decisión de utilizar un sistema de baterías, conectar el aerogenerador a la red o buscar un sistema mixto. La viabilidad económica de la instalación. ABSTRACT This project carries out a feasibility study for a vertical-axis wind turbine installation on the roof of a “standard” six-storey building in the centre of Madrid. The project is based on the building located at Lagasca 106, and it is carried out in a “global” way, so that it can serve as an example for future studies in this area and cover the usual problems and issues that this kind of project has to face. These problems and issues are: the justification for choosing a vertical-axis turbine instead of the usual horizontal-axis turbine, together with the model and power capacity of this turbine; the electricity-generation capacity of the proposed installation; problems associated with the current legislation; problems related to the electrical installation, such as the inverter associated with the turbine and the decision to use a battery system, to connect the wind turbine to the grid, or to adopt a mixed solution; and the economic viability of the installation.
Abstract:
Los conjuntos borrosos de tipo 2 (T2FSs) fueron introducidos por L.A. Zadeh en 1975 [65], como una extensión de los conjuntos borrosos de tipo 1 (FSs). Mientras que en estos últimos el grado de pertenencia de un elemento al conjunto viene determinado por un valor en el intervalo [0, 1], en el caso de los T2FSs el grado de pertenencia de un elemento es un conjunto borroso en [0,1], es decir, un T2FS queda determinado por una función de pertenencia μ : X → M, donde M = [0,1]^[0,1] = Map([0,1], [0,1]) es el conjunto de las funciones de [0,1] en [0,1] (ver [39], [42], [43], [61]). Desde que los T2FSs fueron introducidos, se han generalizado a dicho conjunto (ver [39], [42], [43], [61], por ejemplo), a partir del “Principio de Extensión” de Zadeh [65] (ver Teorema 1.1), muchas de las definiciones, operaciones, propiedades y resultados obtenidos en los FSs. Sin embargo, como sucede en cualquier área de investigación, quedan muchas lagunas y problemas abiertos que suponen un reto para cualquiera que quiera hacer un estudio profundo en este campo. A este reto se ha dedicado el presente trabajo, logrando avances importantes en este sentido de “rellenar huecos” existentes en la teoría de los conjuntos borrosos de tipo 2, especialmente en las propiedades de autocontradicción y N-autocontradicción, y en las operaciones de negación, t-norma y t-conorma sobre los T2FSs. Cabe destacar que en [61] se justifica que las operaciones sobre los T2FSs (Map(X,M)) se pueden definir de forma natural a partir de las operaciones sobre M, verificando las mismas propiedades. Por tanto, por ser más fácil, en el presente trabajo se toma como objeto de estudio a M, y algunos de sus subconjuntos, en vez de Map(X,M). En cuanto a la operación de negación, en el marco de los conjuntos borrosos de tipo 2 (T2FSs), usualmente se emplea para representar la negación en M, una operación asociada a la negación estándar en [0,1]. Sin embargo, dicha operación no verifica los axiomas que, intuitivamente, debe verificar cualquier operación para ser considerada negación en el conjunto M. En este trabajo se presentan los axiomas de negación y negación fuerte en los T2FSs. También se define una operación asociada a cualquier negación suprayectiva en [0,1], incluyendo la negación estándar, y se estudia, junto con otras propiedades, si es negación y negación fuerte en L (conjunto de las funciones de M normales y convexas). Además, se comprueba en qué condiciones se cumplen las leyes de De Morgan para un extenso conjunto de pares de operaciones binarias en M. Por otra parte, las propiedades de N-autocontradicción y autocontradicción, han sido suficientemente estudiadas en los conjuntos borrosos de tipo 1 (FSs) y en los conjuntos borrosos intuicionistas de Atanassov (AIFSs). En el presente trabajo se inicia el estudio de las mencionadas propiedades, dentro del marco de los T2FSs cuyos grados de pertenencia están en L. En este sentido, aquí se extienden los conceptos de N-autocontradicción y autocontradicción al conjunto L, y se determinan algunos criterios para verificar tales propiedades. En cuanto a otras operaciones, Walker et al. ([61], [63]) definieron dos familias de operaciones binarias sobre M, y determinaron que, bajo ciertas condiciones, estas operaciones son t-normas (normas triangulares) o t-conormas sobre L.
En este trabajo se introducen operaciones binarias sobre M, unas más generales y otras diferentes a las dadas por Walker et al., y se estudian varias propiedades de las mismas, con el objeto de deducir nuevas t-normas y t-conormas sobre L. ABSTRACT Type-2 fuzzy sets (T2FSs) were introduced by L.A. Zadeh in 1975 [65] as an extension of type-1 fuzzy sets (FSs). Whereas for FSs the degree of membership of an element of a set is determined by a value in the interval [0,1], the degree of membership of an element for T2FSs is a fuzzy set in [0,1]; that is, a T2FS is determined by a membership function μ : X → M, where M = [0,1]^[0,1] is the set of functions from [0,1] to [0,1] (see [39], [42], [43], [61]). Later, many definitions, operations, properties and results known on FSs have been generalized to T2FSs (see e.g. [39], [42], [43], [61]) by employing Zadeh’s Extension Principle [65] (see Theorem 1.1). However, as in any area of research, there are still many open problems which represent a challenge for anyone who wants to make a deep study in this field. We have taken up this challenge, making significant progress in “filling gaps” (closing open problems) in the theory of T2FSs, especially regarding the properties of self-contradiction and N-self-contradiction, and the operations of negation, t-norms (triangular norms) and t-conorms on T2FSs. Walker and Walker justify in [61] that the operations on Map(X,M) can be defined naturally from the operations on M and have the same properties. Therefore, since it is simpler, we work on M, and on some subsets of M, as all the results are easily and directly extensible to Map(X,M). Regarding the operation of negation, an operation associated with the standard negation on [0,1] has usually been employed in the framework of T2FSs, but such an operation does not satisfy the negation axioms on M. In this work, we introduce the axioms that a function in M should satisfy to qualify as a type-2 negation or a strong type-2 negation. Also, we define an operation on M associated with any surjective negation on [0,1], and analyse, among other properties, whether such an operation is a negation or a strong negation on L (the set of all normal and convex functions of M). Besides, we study De Morgan’s laws with respect to some binary operations on M. On the other hand, the properties of self-contradiction and N-self-contradiction have been extensively studied on FSs and on Atanassov’s intuitionistic fuzzy sets (AIFSs). In this research we begin the study of these properties in the framework of T2FSs. In this sense, we give the definitions of self-contradiction and N-self-contradiction on L, and establish criteria to verify these properties on L. Regarding t-norms and t-conorms, Walker et al. ([61], [63]) defined two families of binary operations on M and found that, under some conditions, these operations are t-norms or t-conorms on L. In this work we introduce binary operations on M more general than those given by Walker et al. and study the minimum conditions necessary for these operations to satisfy each of the axioms of t-norms and t-conorms.
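For concreteness, the extension-principle operations on M referred to above are commonly written as follows (a standard formulation, which the thesis generalizes):

\[
(f \sqcup g)(x) = \sup_{y \vee z = x} \min\{f(y), g(z)\}, \qquad
(f \sqcap g)(x) = \sup_{y \wedge z = x} \min\{f(y), g(z)\},
\]
\[
(\neg_{n} f)(x) = \sup_{n(y) = x} f(y), \qquad f, g \in M = [0,1]^{[0,1]},
\]

where ∨ and ∧ denote maximum and minimum on [0,1] and n is a surjective negation on [0,1].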
Abstract:
Los fundamentos de la Teoría de la Decisión Bayesiana proporcionan un marco coherente en el que se pueden resolver los problemas de toma de decisiones. La creciente disponibilidad de ordenadores potentes está llevando a tratar problemas cada vez más complejos con numerosas fuentes de incertidumbre multidimensionales; varios objetivos conflictivos; preferencias, metas y creencias cambiantes en el tiempo y distintos grupos afectados por las decisiones. Estos factores, a su vez, exigen mejores herramientas de representación de problemas; imponen fuertes restricciones cognitivas sobre los decisores y conllevan difíciles problemas computacionales. Esta tesis tratará estos tres aspectos. En el Capítulo 1, proporcionamos una revisión crítica de los principales métodos gráficos de representación y resolución de problemas, concluyendo con algunas recomendaciones fundamentales y generalizaciones. Nuestro segundo comentario nos lleva a estudiar tales métodos cuando sólo disponemos de información parcial sobre las preferencias y creencias del decisor. En el Capítulo 2, estudiamos este problema cuando empleamos diagramas de influencia (DI). Damos un algoritmo para calcular las soluciones no dominadas en un DI y analizamos varios conceptos de solución ad hoc. El último aspecto se estudia en los Capítulos 3 y 4. Motivado por una aplicación de gestión de embalses, introducimos un método heurístico para resolver problemas de decisión secuenciales. Como muestra resultados muy buenos, extendemos la idea a problemas secuenciales generales y cuantificamos su bondad. Exploramos después en varias direcciones la aplicación de métodos de simulación al Análisis de Decisiones. Introducimos primero métodos de Monte Carlo para aproximar el conjunto no dominado en problemas continuos. Después, proporcionamos un método de Monte Carlo basado en cadenas de Markov para problemas con información completa con estructura general: las decisiones y las variables aleatorias pueden ser continuas, y la función de utilidad puede ser arbitraria. Nuestro esquema es aplicable a muchos problemas modelizados como DI. Finalizamos con un capítulo de conclusiones y problemas abiertos. ---ABSTRACT--- The foundations of Bayesian Decision Theory provide a coherent framework in which decision making problems may be solved. With the advent of powerful computers and given the many challenging problems we face, we are gradually attempting to solve more and more complex decision making problems with high and multidimensional uncertainty, multiple objectives, influence of time over decision tasks and influence over many groups. These complexity factors demand better representation tools for decision making problems; place strong cognitive demands on the decision maker's judgements; and lead to involved computational problems. This thesis will deal with these three topics. In recent years, many representation tools have been developed for decision making problems. In Chapter 1, we provide a critical review of most of them and conclude with recommendations and generalisations. Given our second concern, we may wonder how to deal with those representation tools when there is only partial information. In Chapter 2, we find out how to deal with such a problem when it is structured as an influence diagram (ID). We give an algorithm to compute nondominated solutions in IDs and analyse several ad hoc solution concepts. The last issue is studied in Chapters 3 and 4.
In a reservoir management case study, we have introduced a heuristic method for solving sequential decision making problems. Since it shows very good performance, we extend the idea to general problems and quantify its goodness. We explore then in several directions the application of simulation based methods to Decision Analysis. We first introduce Monte Carlo methods to approximate the nondominated set in continuous problems. Then, we provide a Monte Carlo Markov chain method for problems under total information with general structure: decisions and random variables may be continuous, and the utility function may be arbitrary. Our scheme is applicable to many problems modeled as IDs. We conclude with discussions and several open problems.
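A minimal sketch of the simulation-based idea (illustrative only): estimate each alternative's expected objective vector by Monte Carlo and keep the nondominated alternatives; all names are hypothetical.

import numpy as np

def mc_nondominated(alternatives, simulate, n_samples=2000, seed=0):
    """Estimate each alternative's expected objective vector by Monte Carlo and
    return the indices of the nondominated alternatives (maximization).
    simulate(a, rng) must return one sampled objective vector for alternative a."""
    rng = np.random.default_rng(seed)
    means = np.array([
        np.mean([simulate(a, rng) for _ in range(n_samples)], axis=0)
        for a in alternatives
    ])
    nondom = []
    for i, mi in enumerate(means):
        dominated = any(np.all(mj >= mi) and np.any(mj > mi)
                        for j, mj in enumerate(means) if j != i)
        if not dominated:
            nondom.append(i)
    return nondom, means

# Example: two noisy objectives (e.g. expected benefit and a reliability score).
sim = lambda a, rng: np.array([a + rng.normal(0, 0.5), 1.0 - a + rng.normal(0, 0.5)])
print(mc_nondominated([0.2, 0.5, 0.8], sim))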
Abstract:
When dealing with the design of a high-speed train, a multi-objective shape optimization problem is formulated, as these vehicles are subject to many aerodynamic problems which are known to be in conflict. Greater mobility involves an increase in both cruise speed and lightness, and these requirements directly influence the stability of the train and the ride comfort of the passengers when the train is subjected to a side wind. Thus, crosswind stability plays an increasingly relevant role among the aerodynamic objectives to be optimized, and extensive research activity is observed on the aerodynamic response under crosswind conditions.
Abstract:
El documento presentado contiene una aproximación a algunos de los diversos problemas actuales existentes en el campo de la robótica paralela. Primeramente se hace una propuesta para el cálculo de los parámetros estructurales de los robots paralelos, mediante el desarrollo de una metodología que combina las herramientas del estudio de mecanismos con el álgebra lineal; en una segunda sección se propone la solución del problema geométrico directo a partir de la definición de ecuaciones de restricción y su respectiva solución usando métodos numéricos, así como la solución para el problema geométrico inverso; en la tercera parte se aborda el problema dinámico tanto directo como inverso y su solución a partir de una metodología basada en el método de Kane o de trabajos virtuales. Para las propuestas metodológicas expuestas se han desarrollado ejemplos de aplicación tanto teóricos como prácticos (simulaciones y pruebas físicas), donde se demuestra su alcance y desempeño, mediante su utilización en múltiples configuraciones para manipuladores paralelos, entre los que se destacan la plataforma Stewart Gough, y el 3-RRR. Todo con el objetivo de extender su aplicación en futuros trabajos de investigación en el área. ABSTRACT The document presented here provides an approach to some of the many current problems in the field of parallel robotics. First, a proposal is made for calculating the structural parameters of parallel robots, by developing a methodology that combines mechanism-analysis tools with linear algebra; the second section presents the solution of the direct geometric problem based on the definition of constraint equations and their solution using numerical methods, as well as the solution of the inverse geometric problem; the third part addresses both the direct and the inverse dynamic problems and their solution based on a methodology built on Kane's method, i.e. virtual work. For each of the proposed methodologies, examples of both theoretical and practical application (simulations and physical tests) have been developed, demonstrating their scope and performance through their use in multiple configurations of parallel manipulators, most notably the Stewart-Gough platform and the 3-RRR. All of this is aimed at extending their application in future research in the area.
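As a small illustration of the "constraint equations plus numerical solution" approach to the direct geometric problem, the snippet below solves a planar two-legged mechanism with scipy.optimize.fsolve; the Stewart-Gough and 3-RRR cases studied in the document lead to larger systems of the same kind, and all dimensions here are illustrative.

import numpy as np
from scipy.optimize import fsolve

# Fixed base joints and the actuated leg lengths (the 'inputs' of the direct problem).
A1, A2 = np.array([0.0, 0.0]), np.array([4.0, 0.0])
L1, L2 = 3.0, 3.5

def constraints(p):
    """Loop-closure constraint equations: each leg must keep its actuated length."""
    x, y = p
    return [
        (x - A1[0]) ** 2 + (y - A1[1]) ** 2 - L1 ** 2,
        (x - A2[0]) ** 2 + (y - A2[1]) ** 2 - L2 ** 2,
    ]

# Direct geometric problem: find the platform point compatible with the leg lengths.
platform = fsolve(constraints, x0=[2.0, 2.0])
print(platform, constraints(platform))   # residuals should be ~0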
Abstract:
El empleo de refuerzos de FRP en vigas de hormigón armado es cada vez más frecuente por sus numerosas ventajas frente a otros métodos más tradicionales. Durante los últimos años, la técnica FRP-NSM, consistente en introducir barras de FRP sobre el recubrimiento de una viga de hormigón, se ha posicionado como uno de los mejores métodos de refuerzo y rehabilitación de estructuras de hormigón armado, tanto por su facilidad de montaje y mantenimiento, como por su rendimiento para aumentar la capacidad resistente de dichas estructuras. Si bien el refuerzo a flexión ha sido ampliamente desarrollado y estudiado hasta la fecha, no sucede lo mismo con el refuerzo a cortante, debido principalmente a su gran complejidad. Sin embargo, se debería dedicar más estudio a este tipo de refuerzo si se pretenden conservar los criterios de diseño en estructuras de hormigón armado, los cuales están basados en evitar el fallo a cortante por sus consecuencias catastróficas. Esta ausencia de información y de normativa es la que justifica esta tesis doctoral. En este proyecto se van a desarrollar dos metodologías alternativas, que permiten estimar la capacidad resistente de vigas de hormigón armado, reforzadas a cortante mediante la técnica FRP-NSM. El primer método aplicado consiste en la implementación de una red neuronal artificial capaz de predecir adecuadamente la resistencia a cortante de vigas reforzadas con este método a partir de experimentos anteriores. Asimismo, a partir de la red se han llevado a cabo algunos estudios a fin de comprender mejor la influencia real de algunos parámetros de la viga y del refuerzo sobre la resistencia a cortante con el propósito de lograr diseños más seguros de este tipo de refuerzo. Una configuración óptima de la red requiere discriminar adecuadamente de entre los numerosos parámetros (geométricos y de material) que pueden influir en el comportamiento resistente de la viga, para lo cual se han llevado a cabo diversos estudios y pruebas. Mediante el segundo método, se desarrolla una ecuación de proyecto que permite, de forma sencilla, estimar la capacidad de vigas reforzadas a cortante con FRP-NSM, la cual podría ser propuesta para las principales guías de diseño. Para alcanzar este objetivo, se plantea un problema de optimización multiobjetivo a partir de resultados de ensayos experimentales llevados a cabo sobre vigas de hormigón armado con y sin refuerzo de FRP. El problema multiobjetivo se resuelve mediante algoritmos genéticos, en concreto el algoritmo NSGA-II, por ser más apropiado para problemas con varias funciones objetivo que los métodos de optimización clásicos. Mediante una comparativa de las predicciones realizadas con ambos métodos y de los resultados de ensayos experimentales se podrán establecer las ventajas e inconvenientes derivadas de la aplicación de cada una de las dos metodologías. Asimismo, se llevará a cabo un análisis paramétrico con ambos enfoques a fin de intentar determinar la sensibilidad de aquellos parámetros más sensibles a este tipo de refuerzo. Finalmente, se realizará un análisis estadístico de la fiabilidad de las ecuaciones de diseño derivadas de la optimización multiobjetivo. Con dicho análisis se puede estimar la capacidad resistente de una viga reforzada a cortante con FRP-NSM dentro de un margen de seguridad especificado a priori.
ABSTRACT The use of externally bonded (EB) fibre-reinforced polymer (FRP) composites has gained acceptance during the last two decades in the construction engineering community, particularly in the rehabilitation of reinforced concrete (RC) structures. Currently, to increase the shear resistance of RC beams, FRP sheets are externally bonded (EB-FRP) and applied to the external side surfaces of the beams to be strengthened, in different configurations. Of more recent application, the near-surface mounted FRP bar (NSM-FRP) method is another technique successfully used to increase the shear resistance of RC beams. In the NSM method, FRP rods are embedded into grooves intentionally prepared in the concrete cover of the side faces of RC beams. While flexural strengthening has been widely developed and studied so far, the same does not apply to shear strengthening, mainly due to its great complexity. Nevertheless, if design criteria, which are based on avoiding shear failure and its catastrophic consequences, are to be preserved, more research should be devoted to this type of strengthening. In spite of this, accurately calculating the shear capacity of FRP shear-strengthened RC beams remains a complex challenge that has not yet been fully resolved, due to the numerous variables involved in the procedure. The objective of this Thesis is to develop methodologies to evaluate the capacity of FRP shear-strengthened RC beams, approaching the problem from a point of view different from numerical modeling, namely by using artificial intelligence techniques. With this purpose two different approaches have been developed: one concerned with the use of artificial neural networks (ANNs), and the other based on an optimization approach developed jointly with the use of ANNs and solved with genetic algorithms (GAs). With these approaches some of the difficulties of numerical modeling can be overcome. As an alternative tool to conventional numerical techniques, neural networks do not provide closed-form solutions for modeling problems but do, however, offer a complex and accurate solution based on a representative set of historical examples of the relationship. Furthermore, they can adapt solutions over time to include new data. On the other hand, as a second proposal, an optimization approach has also been developed to derive simple yet accurate shear design equations for this kind of strengthening. This approach is developed in a multi-objective framework by considering experimental results of RC beams with and without NSM-FRP. Furthermore, the results obtained with the previous ANN-based scheme are also used as a filter to choose the parameters to include in the design equations. Genetic algorithms are used to solve the optimization problem since they are especially suitable for solving multi-objective problems compared to standard optimization methods. The key features of the two proposed procedures are outlined and their performance in predicting the capacity of NSM-FRP shear-strengthened RC beams is evaluated by comparison with results from experimental tests and with predictions obtained using a simplified numerical model. A sensitivity study of the predictions of both models with respect to the input parameters is also carried out.
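A minimal sketch of the ANN part of the approach, assuming scikit-learn's MLPRegressor; the input parameters and the synthetic data are purely illustrative, whereas the thesis trains on a database of real shear tests.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical inputs: beam width, effective depth, concrete strength, FRP ratio, FRP spacing.
rng = np.random.default_rng(0)
X = rng.uniform([150, 200, 20, 0.001, 50], [400, 600, 60, 0.01, 300], size=(200, 5))
# Synthetic 'shear capacity' target, for demonstration purposes only (kN).
y = 0.5 * X[:, 0] * X[:, 1] * 1e-3 * np.sqrt(X[:, 2]) + 3000 * X[:, 3] + rng.normal(0, 5, 200)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10, 10), max_iter=5000, random_state=0))
model.fit(X, y)                      # in the real setting: fit on the experimental database
print(model.predict(X[:3]), y[:3])   # rough check of the fitted surrogate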
Abstract:
The author participated in the 6th EU Framework Project "Q-pork Chains" (FP6-036245-2) from 2007 to 2009. From the work reports of China and other countries, it was found that, compared with other countries, China has serious problems in pork quality and safety. By comparing pork chain management between China and Spain, it is found that the difference in governance structure is one of the main differences in pork chain management between the two countries. In China, spot-market relationships still dominate the governance structure of the pork chain, especially between the numerous household pig holders and the great number of small slaughterhouses, while in Spain chain agents commonly cooperate through cooperatives or integration. Recent studies on quality management at the chain level have also shown that supply chain integration has a direct effect on quality management practices (Han, 2010). Therefore, the author investigated governance structure choices in supply chain management. This was set as the first research objective: to explain the governance structure choice process and its influencing factors in supply chain management, analysing the pork chain cases in Spain and in China. During the further investigation, the author noticed that the international trade of pork between Spain and China has not been smooth since the signing of the bilateral agreement on pork trade in 2007. Thus, another objective of the research is to find and solve the problems existing in the international pork chain between Spain and China. For the first objective, to explain governance structure choices in supply chain management, the thesis conducts research in three main sections. First of all, the thesis gives a literature overview in chapter two on Supply Chain Management (SCM), agri-food chain management and pork chain management. It concludes that SCM is a systems approach that views the supply chain as a whole and manages the total flow of goods from the supplier to the ultimate customer. It includes the bi-directional flow of products (materials and services) and information, and the associated managerial and operational activities. It is also customer-focused, aiming to create a unique and individual source of customer value with an appropriate use of resources, leading to customer satisfaction and building competitive chain advantages. Agri-food chain management and pork chain management are applications of SCM in the agri-food sector and the pork sector, respectively. Chapter three then gives a comparative study of the pork chain and pork chain management in Spain and China. Many differences are found, the main one being the governance structure in pork chain management. Furthermore, the author presents an empirical study on governance structure choice in chapter five. It is concluded that the governance structure of a supply chain consists of a collection of rules/institutions/constraints structuring the transactions between the various stakeholders.
Based on the overview of literature closely related to governance structure, such as transaction cost economics, transaction value analysis and resource-based view theories, seven hypotheses are proposed:
Hypothesis 1: Transaction cost has a positive relationship with governance structure choice.
Hypothesis 2: Uncertainty has a positive relationship with transaction cost; higher uncertainty exerts higher transaction cost.
Hypothesis 3: The relationship between asset specificity and transaction cost is positive.
Hypothesis 4: Collaboration advantages and governance structure choice have a positive relationship.
Hypothesis 5: Willingness to collaborate has a positive relationship with collaboration advantages.
Hypothesis 6: Capability to collaborate has a positive relationship with collaboration advantages.
Hypothesis 7: Uncertainty has a negative effect on collaboration advantages.
It is noted that, as the transaction cost value is negative, the transaction cost mentioned in the hypotheses is its absolute value. To test the seven hypotheses, a Structural Equation Model (SEM) is applied, using data collected from 350 pork slaughtering and processing companies in Jiangsu, Shandong and Henan Provinces in China. Based on the empirical SEM model and its results, the seven hypotheses are supported, and the author draws several conclusions accordingly. It is found that the governance structure choice of the chain depends not only on transaction cost but also on collaboration advantages. Exchange partners establish more stable and more intense relationships to reduce transaction cost and to maximize collaboration advantages. "Collaboration advantages" is defined in this thesis as the joint value achieved through the transactions (mutual activities) of agents in supply chains. This value takes the form of improvements, mainly in mutual logistics systems, cash response, information exchange, technology, innovation and quality management. Governance structure choice is jointly decided by transaction cost and collaboration advantages. Chain agents adopt different governance structures to coordinate, in order to decrease their transaction cost and increase their collaboration advantages. In China's pork chain, spot-market relationships dominate the governance structure among the numerous backyard pig farmers and small family slaughterhouses, as they are connected by acquaintance relationships and the transaction cost is therefore low. Their relationship is reliable because they know each other in the neighborhood; as a result, the spot-market relationship is suitable for their exchange. However, transactions between large-scale slaughtering and processing industries and small-scale pig producers are becoming difficult. The information-withholding and hold-up behavior of small-scale pig producers increases the transaction cost between them and the large-scale slaughtering and processing industries. Thus, through more intense and stable relationships with pig producers, processing industries reduce transaction cost and improve collaboration advantages with their chain partners, including quality and safety collaboration advantages, meaning that processing industries are able to provide consumers with products of better quality and higher safety. It is also concluded that transaction cost is influenced mainly by uncertainty and asset specificity, which is in line with the new institutional economics theories developed by O. E. Williamson.
In China's pork chain, behavioral uncertainty is created by the hold-up behavior of the great number of small pig producers, while the big slaughtering and processing industries have strong asset specificity. On the other hand, collaboration advantages are influenced by chain agents' willingness to collaborate and their capability to cooperate. With the fast growth of large-scale slaughtering and processing industries, they are more willing to get to know and make an effort to cooperate with their chain members, and they are more capable of creating joint value together with other chain agents. Therefore, they are now the main chain agents driving a more intense and stable governance structure in China's pork chain. For the other objective, to find and solve the problems in the international pork chain between Spain and China, chapter four analyses the international pork chain. This study explains, from the chain perspective, why the international trade of pork between Spain and China is not sufficient. It is found that the first obstacle is the high quality and safety requirements set by the Chinese government, which make it difficult for Spanish companies to obtain authorization to export. Other aspects, such as the fact that Spanish pork is not price-competitive compared with countries such as Denmark, the United States and Canada, and that Chinese consumers do not have sufficient information on Spanish pork products, are also important reasons why Spain does not export large quantities of pork products to China. It is concluded that China's government places very strict quality and safety requirements on Spanish pork products, which makes trade difficult to complete. The two countries need to establish a more stable and intense trade relationship. They should also make information exchange sufficient and efficient and try to break down trade barriers. Spanish companies should consider proper pricing strategies to win the Chinese pork market.
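As an illustration of how such a structural model can be specified, the sketch below assumes the Python package semopy and entirely hypothetical indicator names; it mirrors the direction of Hypotheses 1-7 rather than the thesis' actual measurement model.

import pandas as pd
from semopy import Model

# Hypothetical survey columns: each latent construct measured by the listed items.
description = """
TransactionCost =~ tc1 + tc2 + tc3
Uncertainty =~ un1 + un2
AssetSpecificity =~ as1 + as2
CollabAdvantage =~ ca1 + ca2 + ca3
Willingness =~ wi1 + wi2
Capability =~ cp1 + cp2
Governance =~ gv1 + gv2
TransactionCost ~ Uncertainty + AssetSpecificity
CollabAdvantage ~ Willingness + Capability + Uncertainty
Governance ~ TransactionCost + CollabAdvantage
"""

data = pd.read_csv("pork_chain_survey.csv")   # hypothetical file containing the columns above
model = Model(description)
model.fit(data)
print(model.inspect())                        # estimated path coefficients and significance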
Abstract:
Some requirements of engineering programmes, such as the ability to use the techniques, skills and modern engineering tools necessary for engineering practice, an understanding of professional and ethical responsibility, or the ability to communicate effectively, need new activities designed for measuring students' progress. Negotiations take place continuously at every stage of a project, so the ability of engineers and managers to carry out a negotiation effectively is crucial for the success or failure of projects and businesses. Since negotiation involves communication between individuals motivated to come together in an agreement for mutual benefit, it can be used to enhance these personal abilities. The main objective of this study was to evaluate the adequacy of mixing playing sessions and theory to maximise students' strategic vision in combination with negotiating skills. Results show that combining play with theoretical training teaches students to strategise through the analysis and discussion of alternatives, leading to a better outcome.