901 results for Multi-Criteria Problems


Relevance: 90.00%

Publisher:

Abstract:

This thesis solves visibility and proximity problems on triangulated surfaces considering generalized elements. As generalized elements we consider points, segments, polygonal chains and polygons. The strategies we propose use computational geometry algorithms and graphics hardware. We begin by treating visibility problems on triangulated terrain models considering a set of generalized view elements. Two methods are presented to obtain approximate multi-visibility maps. A multi-visibility map is the subdivision of the terrain domain that encodes visibility according to different criteria. The first method, difficult to implement, uses exact visibility information to approximately reconstruct the multi-visibility map. The second, accompanied by implementation results, obtains approximate visibility information to compute and visualize discrete multi-visibility maps using graphics hardware. As applications, multi-visibility problems between regions are solved and queries about the multi-visibility of a point or a region are answered. We then treat proximity problems on triangulated polyhedral surfaces considering generalized sources. Two methods, with implementation results, are presented to compute distances from generalized sources on polyhedral surfaces that may contain generalized obstacles. The first method computes exactly the distances defined by the shortest paths from the sources to the points of the polyhedron. The second method computes approximate distances along shortest paths on weighted polyhedral surfaces. As applications, order-k Voronoi diagrams are computed, and some facility location problems are solved approximately. 
A theoretical study is also provided on the complexity of the order-k Voronoi diagrams of a set of generalized sites on an unweighted polyhedron.
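
The discrete multi-visibility computation can be illustrated with a much simplified sketch: on a raster heightfield instead of a triangulated terrain, and without graphics hardware, a cell is visible from a view element if no terrain sample rises above the sight line, and a per-cell counter encodes how many view elements see it. The sampling scheme and all names are illustrative assumptions, not the thesis's algorithms.

```python
# Simplified multi-visibility on a raster heightfield (illustrative only:
# the thesis works on triangulated terrains, with GPU acceleration).
def visible(dem, a, b, eye=1.0, samples=20):
    """Line-of-sight test between grid points a and b given as (row, col)."""
    (r0, c0), (r1, c1) = a, b
    h0 = dem[r0][c0] + eye            # observer height above the terrain
    h1 = dem[r1][c1]
    for i in range(1, samples):
        t = i / samples
        r, c = r0 + t * (r1 - r0), c0 + t * (c1 - c0)
        terrain = dem[round(r)][round(c)]   # nearest-neighbour terrain sample
        sight = h0 + t * (h1 - h0)          # height of the sight line at t
        if terrain > sight:
            return False
    return True

def multi_visibility(dem, viewpoints):
    """Count, per cell, how many view elements see it."""
    rows, cols = len(dem), len(dem[0])
    counts = [[0] * cols for _ in range(rows)]
    for vp in viewpoints:
        for r in range(rows):
            for c in range(cols):
                counts[r][c] += visible(dem, vp, (r, c))
    return counts
```

On a flat grid with a wall of height 10 in the middle column, a viewpoint on one side sees its own side but not the cells behind the wall.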

Relevance: 90.00%

Publisher:

Abstract:

A fast Knowledge-based Evolution Strategy, KES, for the multi-objective minimum spanning tree is presented. The proposed algorithm is validated, for the bi-objective case, with an exhaustive search for small problems (4-10 nodes), and compared with a deterministic algorithm, EPDA, and with NSGA-II for larger problems (up to 100 nodes) using hard benchmark instances. Experimental results show that KES finds the true Pareto fronts for small instances of the problem and calculates good approximation Pareto sets for the larger instances tested. The fronts calculated by KES are shown to be superior to NSGA-II fronts and almost as good as those established by EPDA. KES is designed to be scalable to multi-objective problems and fast owing to its low computational complexity.
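
The exhaustive search used to validate KES on small instances can be sketched as follows (a hypothetical brute-force enumeration, not the authors' code): generate every spanning tree of a small graph, evaluate both objectives, and keep the non-dominated cost vectors as the true Pareto front.

```python
from itertools import combinations

def spanning_trees(n, edges):
    """Yield all spanning trees of an n-node graph given as
    [(u, v, (cost1, cost2)), ...], by brute force (small n only)."""
    for subset in combinations(edges, n - 1):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        ok = True
        for u, v, _ in subset:
            ru, rv = find(u), find(v)
            if ru == rv:          # cycle -> not a tree
                ok = False
                break
            parent[ru] = rv
        if ok:
            yield subset

def pareto_front(points):
    """Non-dominated subset when both objectives are minimized."""
    return sorted({p for p in points
                   if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                              for q in points)})

def true_front(n, edges):
    costs = [tuple(map(sum, zip(*(c for _, _, c in t))))
             for t in spanning_trees(n, edges)]
    return pareto_front(costs)
```

For a triangle with edge costs (1,5), (5,1) and (3,3), the three spanning trees are mutually non-dominated, so all three cost vectors form the true front.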

Relevance: 90.00%

Publisher:

Abstract:

The evaluation of EU policy in the area of rural land use management often encounters problems of multiple and poorly articulated objectives. Agri-environmental policy has a range of aims, including natural resource protection, biodiversity conservation and the protection and enhancement of landscape quality. Forestry policy, in addition to production and environmental objectives, increasingly has social aims, including enhancement of human health and wellbeing, lifelong learning, and the cultural and amenity value of the landscape. Many of these aims are intangible, making them hard to define and quantify. This article describes two approaches for dealing with such situations, both of which rely on substantial participation by stakeholders. The first is the Agri-Environment Footprint Index, a form of multi-criteria participatory approach. The other, applied here to forestry, has been the development of ‘multi-purpose’ approaches to evaluation, which respond to the diverse needs of stakeholders through the use of mixed methods and a broad suite of indicators, selected through a participatory process. Each makes use of case studies and involves stakeholders in the evaluation process, thereby enhancing their commitment to the programmes and increasing their sustainability. Both also demonstrate more ‘holistic’ approaches to evaluation than the formal methods prescribed in the EU Common Monitoring and Evaluation Framework.

Relevance: 90.00%

Publisher:

Abstract:

For many years, drainage design was mainly about providing sufficient network capacity. This traditional approach has been successful with the aid of computer software and technical guidance. However, drainage design criteria have been evolving in response to rapid population growth, urbanisation, climate change and increasing sustainability awareness. Sustainable drainage systems that bring benefits beyond water management have been recommended as better alternatives to conventional pipes and storage. Although the concepts and good-practice guidance have been communicated to decision makers and the public for years, network capacity still remains the key design focus in many circumstances, while the additional benefits are generally considered secondary. Yet the picture is changing. The industry is beginning to realise that delivering multiple benefits should be given top priority, with the drainage service itself considered a secondary benefit instead. This shift in focus means the industry has to adapt to new design challenges, and new guidance and computer software are needed to assist decision makers. For this purpose, we developed a new decision support system. The system consists of two main components: a multi-criteria evaluation framework for drainage systems and a multi-objective optimisation tool. Users can systematically quantify the performance, life-cycle costs and benefits of different drainage systems using the evaluation framework. The optimisation tool can help users determine combinations of design parameters, such as the sizes, order and type of drainage components, that maximise multiple benefits. In this paper, we focus on the optimisation component of the decision support framework. The formulation of the optimisation problem, its parameters and general configuration are discussed. We also examine the sensitivity of individual variables and the benchmark results obtained using common multi-objective optimisation algorithms. 
The work described here is the output of an EngD project funded by EPSRC and XP Solutions.
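
The kind of trade-off the optimisation tool explores can be sketched with a toy enumeration (the parameter names and the cost/benefit models are hypothetical placeholders, not the actual decision support system, which uses evolutionary multi-objective algorithms): evaluate candidate drainage designs and keep the Pareto-optimal trade-offs between life-cycle cost (minimise) and delivered benefit (maximise).

```python
from itertools import product

# Hypothetical cost and benefit models for a candidate design described
# by a storage volume and a permeable-surface fraction.
def evaluate(storage_m3, permeable_frac):
    cost = 120 * storage_m3 + 40000 * permeable_frac        # capital + upkeep
    benefit = 0.8 * storage_m3 ** 0.5 + 5 * permeable_frac  # flood relief + amenity
    return cost, benefit

def pareto_designs(storages, fracs):
    """Keep designs not dominated in (min cost, max benefit)."""
    designs = [((s, f), evaluate(s, f)) for s, f in product(storages, fracs)]
    front = []
    for d, (c, b) in designs:
        dominated = any(c2 <= c and b2 >= b and (c2, b2) != (c, b)
                        for _, (c2, b2) in designs)
        if not dominated:
            front.append((d, (c, b)))
    return front
```

Under these toy models, a small storage with a large permeable fraction is dominated by a larger storage alone, so it drops off the front.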

Relevance: 90.00%

Publisher:

Abstract:

Choosing a supplier properly and efficiently has challenged practitioners and academics since the 1960s. Since then, countless studies have been performed, and relevant changes in the business scenario have been considered, such as global sourcing, quality orientation and just-in-time practices. It is almost a consensus that quality should be the selection driver; however, some polemical findings have questioned this general agreement. Therefore, one objective of this study was to identify the supplier selection criteria and reopen this discussion. Moreover, since Dickson (1966) suggested the existing business relationship as a selection criterion, the importance of the business relationship for the company was reviewed, and a set of potential negative effects that could arise from it was noted. Considering these side effects, this research investigated how the relationship can influence supplier selection and how its harmful effects can affect the selection process. The impact of this phenomenon was investigated cross-nationally. The research strategy adopted was a controlled experiment via vignettes combined with discrete choice analysis, with data collected in China and Brazil. Five major findings could be drawn from the results. First, when purchasers were asked to declare their supplier selection priorities, quality was stated as the most important, independently of country and relationship. This result is consistent with diverse studies since the 1960s. However, when purchasers were exposed to a multi-criteria trade-off situation, their actual selection priorities deviated from what they had declared. In the actual decision-making without the influence of the buyer-supplier relationship, Brazilian purchasers focused on price, while Chinese buyers prioritized delivery and then price. This observation reinforces some controversial prior studies by Verma & Pullman (1998) and Hirakubo & Kublin (1998). 
Second, by introducing the buyer-supplier relationship (operationalized via relational capital) into the supplier selection process, this research extended the existing studies and found that Brazilian buyers still focused on price; the relationship became just another selection criterion, like quality and delivery. In the Chinese sample, however, the results suggested that quality was entirely discarded and the decision was made mainly on price and relationship. The third finding suggested that relational capital could legitimate the quality and sustainability of the supplier, replacing these selection criteria and making the decision task less complex. Additionally, with relational capital, the decisions were associated with biases such as availability cognition, commitment, confirmatory and perceived biases. Analysing the purchasers' behavior, relational capital induced buyers in both countries to relax their purchasing requirements (quality, delivery and sustainability), leading to potential negative effects. In the Brazilian sample, the willingness to pay a higher price for a lower-quality offer proved to be a potentially counterproductive and suboptimal decision. Finally, the last finding concerned the cultural effect on buyers' decisions: the more relation-oriented a purchaser's cultural background, the more he or she will tend to use relational capital as a decision heuristic, and thus the more susceptible the purchaser will be to the relationship's potential side effects.
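
Discrete choice analysis of the kind used here models the probability that a purchaser picks a given offer as a softmax of a linear utility of the offer's attributes (a conditional logit). The sketch below is a minimal illustration; the attribute weights are hypothetical, not estimates from the study.

```python
import math

# Conditional-logit choice probabilities: P(i) = exp(V_i) / sum_j exp(V_j),
# where V is a linear utility of the offer attributes.
def choice_probabilities(offers, weights):
    utilities = [sum(weights[k] * v for k, v in o.items()) for o in offers]
    m = max(utilities)                     # stabilise the exponentials
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# A price-focused buyer facing a cheap/low-quality versus
# expensive/high-quality trade-off (hypothetical numbers):
offers = [{"price": 80, "quality": 0.6, "relationship": 1.0},
          {"price": 100, "quality": 0.9, "relationship": 0.0}]
weights = {"price": -0.1, "quality": 2.0, "relationship": 0.5}
probs = choice_probabilities(offers, weights)   # probs[0] ≈ 0.87
```

With a strongly negative price weight, the cheaper offer wins despite its lower quality, mirroring the revealed price focus of the Brazilian sample.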

Relevance: 90.00%

Publisher:

Abstract:

The main goal of this dissertation is to develop a multi-criteria decision aid model to be used in choosing oil and gas drilling rig contracts. The developed model should permit the use of multiple criteria, addressing the problems of models that use mainly the contract price as the decision criterion. AHP was chosen because of its wide use, academic and otherwise, its simplicity and flexibility, and because it fulfils all the requirements of the task. The model was developed through interviews and surveys with a specialist in this specific area, who also acts as the main actor in the decision process. The final model consists of six criteria: cost, mobility, automation, technical support, how fast the service can be concluded, and availability to start operations. Three rigs were chosen as possible solutions to the problem. The results suggest that using AHP as a decision support system in this kind of situation is feasible, allows a simplification of the problem, and is a useful tool to improve the knowledge of everyone involved in the process about the problem and its possible solutions.
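
The core of AHP is deriving criterion weights from a pairwise comparison matrix via its principal eigenvector and checking the consistency of the judgements. A minimal sketch, with hypothetical judgements rather than the specialist's, for three of the six criteria:

```python
# AHP weights via power iteration on the pairwise comparison matrix,
# plus Saaty's consistency ratio. Judgements below are hypothetical.
def ahp_weights(M, iters=200):
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]             # normalised principal eigenvector
    Mw = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam = sum(Mw[i] / w[i] for i in range(n)) / n   # lambda_max
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random index
    cr = (lam - n) / (n - 1) / ri                  # consistency ratio
    return w, cr

# "Cost is 3x as important as mobility and 5x as important as automation":
M = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
w, cr = ahp_weights(M)    # w ≈ [0.65, 0.23, 0.12]
```

A consistency ratio below 0.1 is conventionally accepted; here the judgements are nearly consistent, so cr is far below that threshold.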

Relevance: 90.00%

Publisher:

Abstract:

Anthropic disturbances in watersheds, such as inappropriate building development, disorderly land occupation and unplanned land use, may increase the sediment yield and the inflow into the estuary, leading to siltation, changes in channel conformation, and ecosystem and water quality problems. In this context, this study aims to assess the applicability of the SWAT model to estimate, even in a preliminary way, the sediment yield distribution along the Potengi River watershed, as well as its contribution to the estuary. Furthermore, an assessment of the watershed's erosion susceptibility was used for comparison. The susceptibility map was developed by overlaying rainfall erosivity, soil erodibility, terrain slope and land cover; to overlay these maps, a multi-criteria analysis using the AHP method was applied. SWAT was run over a five-year period (1997-2001), considering three scenarios based on different sorts of human interference: a) agriculture; b) pasture; and c) no interference (background). Results were analyzed in terms of surface runoff, sediment yield and their propagation along each river section. The regions in the extreme west of the watershed and in the downstream portions returned the highest sediment yields, reaching 2.8 and 5.1 t/ha·year respectively, whereas the less susceptible central areas returned the lowest values, never more than 0.7 t/ha·year. It was also noticed that in the western sub-watersheds, where the headwaters lie, sediment yield was naturally forced by steep slopes and weak soils. On the other hand, results suggest that the eastern part would not contribute significantly to the sediment inflow into the estuary, and that the larger part of the sediment yield there is due to anthropic activities. For the central region, the analysis of sediment propagation indicates a predominance of deposition over transport. 
Thus, isolated rain storms in the upstream river portions are not expected to supply the estuary with significant sediment. Because the model calibration process has not yet been done, it must be emphasized that the values presented here should not be applied for practical purposes. Even so, this work warns about the risks of further alteration of the natural land cover, mainly in areas close to the headwaters and in the downstream Potengi River.
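
The multi-criteria overlay behind the susceptibility map can be sketched as a weighted sum of normalised criterion rasters. The toy rasters and the weights standing in for the AHP-derived ones are hypothetical placeholders, not the study's data.

```python
# Weighted multi-criteria raster overlay: normalise each criterion to
# [0, 1], then combine with criterion weights (hypothetical here).
def normalise(raster):
    lo = min(min(row) for row in raster)
    hi = max(max(row) for row in raster)
    span = (hi - lo) or 1.0                # guard against a flat raster
    return [[(v - lo) / span for v in row] for row in raster]

def overlay(rasters, weights):
    norm = [normalise(r) for r in rasters]
    rows, cols = len(rasters[0]), len(rasters[0][0])
    return [[sum(w * n[i][j] for w, n in zip(weights, norm))
             for j in range(cols)] for i in range(rows)]

# 2x2 toy rasters for three of the four criteria:
erosivity   = [[200, 400], [600, 800]]
erodibility = [[0.1, 0.3], [0.2, 0.4]]
slope       = [[2, 10], [25, 40]]
weights     = [0.35, 0.25, 0.40]           # stand-ins for AHP weights
susceptibility = overlay([erosivity, erodibility, slope], weights)
```

The cell that maximises every criterion scores 1.0 and the cell that minimises every criterion scores 0.0, with intermediate cells ranked in between.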

Relevance: 90.00%

Publisher:

Abstract:

Graduate program in Production Engineering - FEG

Relevance: 90.00%

Publisher:

Abstract:

Graduate program in Production Engineering - FEG

Relevance: 90.00%

Publisher:

Abstract:

We introduce a dominance intensity measuring method to derive a ranking of alternatives to deal with incomplete information in multi-criteria decision-making problems, on the basis of multi-attribute utility theory (MAUT) and fuzzy set theory. We consider the situation where there is imprecision concerning decision-makers' preferences, and imprecise weights are represented by trapezoidal fuzzy weights. The proposed method is based on the dominance values between pairs of alternatives. These values can be computed by linear programming, as an additive multi-attribute utility model is used to rate the alternatives. Dominance values are then transformed into dominance intensity measures, used to rank the alternatives under consideration. Distances between fuzzy numbers based on the generalization of the left and right fuzzy numbers are used to account for fuzzy weights. An example concerning the selection of intervention strategies to restore an aquatic ecosystem contaminated by radionuclides illustrates the approach. Monte Carlo simulation techniques have been used to show that the proposed method performs well for different imprecision levels in terms of a hit ratio and a rank-order correlation measure.
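
The final ranking step can be illustrated with a crisp sketch. The paper computes the pairwise dominance values by linear programming under trapezoidal fuzzy weights; here they are simply given, and a plain net-dominance aggregation stands in for the paper's intensity measure, which differs in detail.

```python
# Illustrative dominance-intensity ranking over given dominance values.
# D[i][j] > 0 means alternative i's utility exceeds j's by at least
# D[i][j] over the feasible weights (values below are made up).
def dominance_intensity(D):
    """Net dominance of each alternative: the row sum of its values."""
    return [sum(row) for row in D]

def rank(D):
    s = dominance_intensity(D)
    return sorted(range(len(s)), key=lambda i: -s[i])

D = [[0.0,  0.3,  0.5],
     [-0.3, 0.0,  0.2],
     [-0.5, -0.2, 0.0]]
order = rank(D)    # [0, 1, 2]: alternative 0 ranks first
```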

Relevance: 90.00%

Publisher:

Abstract:

The family of Boosting algorithms represents a type of classification and regression approach that has proved very effective in Computer Vision problems, such as the detection, tracking and recognition of faces, people, deformable objects and actions. The first and most popular algorithm, AdaBoost, was introduced in the context of binary classification. Since then, many works have been proposed to extend it to more general domains: multi-class, multi-label, cost-sensitive, etc. Our interest centres on extending AdaBoost to the multi-class field, considering this a first step toward upcoming generalizations. In this dissertation we propose two Boosting algorithms for multi-class classification based on new generalizations of the concept of margin. The first of them, PIBoost, is conceived to tackle the multi-class problem by solving many binary sub-problems. We use a vectorial codification to represent class labels and a multi-class exponential loss function to evaluate classifier responses. This representation produces a set of margin values that provide a range of penalties for failures and rewards for successes. The stagewise optimization of this model introduces an asymmetric Boosting procedure whose costs depend on the number of classes separated by each weak learner. In this way the Boosting procedure takes class imbalances into account when building the ensemble. The resulting algorithm is a well-grounded method that canonically extends the original AdaBoost. The second algorithm proposed, BAdaCost, is conceived for multi-class problems endowed with a cost matrix. Motivated by the few cost-sensitive extensions of AdaBoost to the multi-class field, we propose a new margin that, in turn, yields a new loss function appropriate for evaluating costs. Since BAdaCost generalizes the SAMME, Cost-Sensitive AdaBoost and PIBoost algorithms, we consider it a canonical extension of AdaBoost to this kind of problem. We additionally suggest a simple procedure to compute cost matrices that improve the performance of Boosting in standard and unbalanced problems. A set of experiments demonstrates the effectiveness of both methods against other relevant multi-class Boosting algorithms in their respective areas. In the experiments we use benchmark data sets from the Machine Learning community, firstly to minimize classification errors and secondly to minimize costs. In addition, we successfully applied BAdaCost to a segmentation task, a particular problem in the presence of imbalanced data. 
We conclude the thesis by justifying the horizon of future improvements encompassed in our framework, given its applicability and theoretical flexibility.
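
The vectorial label codification behind SAMME-style multi-class Boosting can be sketched briefly (a generic illustration of the shared codification, not PIBoost's or BAdaCost's specific margins): a label l among K classes becomes a K-vector with 1 at position l and -1/(K-1) elsewhere, the multi-class margin of a response f for true label y is the inner product of y and f, and the multi-class exponential loss is exp of minus that margin over K.

```python
import math

# Vectorial codification of class labels and the resulting multi-class
# margin and exponential loss (SAMME-style; illustrative only).
def codify(label, K):
    return [1.0 if k == label else -1.0 / (K - 1) for k in range(K)]

def margin(y_vec, f_vec):
    return sum(a * b for a, b in zip(y_vec, f_vec))

def exp_loss(label, f_vec, K):
    return math.exp(-margin(codify(label, K), f_vec) / K)

K = 3
y = codify(0, K)            # true class 0 -> [1.0, -0.5, -0.5]
correct = codify(0, K)      # a weak learner voting for the true class
wrong = codify(1, K)        # a weak learner voting for a wrong class
```

A vote for the true class yields a positive margin (and a small loss); a vote for any wrong class yields a negative margin (and a larger loss), which is exactly the penalty/reward range the codification is meant to produce.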

Relevance: 90.00%

Publisher:

Abstract:

The global economic structure, with decentralized centres of production and consumption and the consequent increase in freight traffic all over the world, creates considerable problems and challenges for the freight transport sector. This situation has led maritime shipping to become the most economical and suitable mode for transporting goods globally, and seaports are thus configured as nodes of capital importance in the supply chain, serving as the link between two transport systems, maritime and land. The increase in activity at seaports produces three undesirable effects: increased road congestion, lack of open space in port installations, and a significant environmental impact on seaports. Dry ports were conceived to favour the use of each transport mode in the segments where it is most competitive and to mitigate these problems by moving part of the activity inland. Moreover, implementing dry ports makes it possible to separate the links of the transport chain, so that the most polluting modes with the least transport capacity follow the shortest possible itineraries, or are used only for high added-value goods. Dry ports thus present an opportunity to strengthen intermodal solutions as part of an integrated, sustainable transport chain, promoting rail freight. However, their potential is not fully exploited, as no location-planning methodology exists that is easy to use and gives clear results for decision-making based on the engineering criteria defined by practitioners. 
The decision of where to locate a dry port demands a thorough analysis of the whole logistics chain, with the objective of transferring the largest possible volume of traffic to the most energy-efficient modes, such as rail or short-sea shipping, which are less harmful to the environment. However, this decision must also ensure the sustainability of the location itself. This doctoral thesis aims to lay the theoretical foundations for a decision support tool to establish the most suitable location for the construction of dry ports. The first step is the development of a methodology for evaluating the sustainability and quality of the locations of current dry ports using the following techniques: the DELPHI method, Bayesian networks, multi-criteria analysis and geographic information systems. Recognising that determining the most suitable location for various types of facilities is an important geographical problem, with significant environmental, social, economic, locational and territorial-accessibility repercussions, a set of 40 variables (grouped into 17 factors and these, in turn, into 4 criteria) is considered to evaluate the sustainability of the locations. Multi-criteria analysis is used to establish a score through a scoring algorithm. 
This algorithm is fed by: 1) ratings for each variable extracted from geographic information analysed with ArcGIS (Criteria Assessment Score); 2) the weights of the factors, obtained through a DELPHI questionnaire, a technique characterized by its ability to reach consensus among experts from very different specialities: logistics, sustainability, environmental impact, transport planning and geography; and 3) the weights of the variables, for which Bayesian networks are employed, an important methodological contribution as a novel application of this technique. The weights are obtained by exploiting the classification capability of Bayesian networks, specifically a network designed with a greedy algorithm called K2 that prioritizes each variable according to the relationships established within the set of variables. The main advantage of this technique is that it reduces the arbitrariness in setting weights from which multi-criteria analysis techniques usually suffer. As a case study, the sustainability of the 10 existing dry ports in Spain is evaluated. The results of the DELPHI questionnaire reveal that, when looking for a dry port location, the aspects considered in classical industrial location theories, mainly economic and accessibility factors, carry the greatest weight. However, the remaining factors should not be lost sight of: the questionnaire shows that none of them has a weight small enough to be neglected. By contrast, the results of applying Bayesian networks show a greater importance of the environmental variables, so the sustainability of the locations demands great respect for the natural and urban environments in which they are framed. 
Finally, the practical application shows that the locations of the dry ports currently existing in Spain are of modest quality, and seem to respond more to political decisions than to technical criteria. Policies should therefore be pursued to generate a collaborative-competitive logistics model in which the different factors considered in this research are evaluated.

Relevance: 90.00%

Publisher:

Abstract:

El empleo de refuerzos de FRP en vigas de hormigón armado es cada vez más frecuente por sus numerosas ventajas frente a otros métodos más tradicionales. Durante los últimos años, la técnica FRP-NSM, consistente en introducir barras de FRP sobre el recubrimiento de una viga de hormigón, se ha posicionado como uno de los mejores métodos de refuerzo y rehabilitación de estructuras de hormigón armado, tanto por su facilidad de montaje y mantenimiento, como por su rendimiento para aumentar la capacidad resistente de dichas estructuras. Si bien el refuerzo a flexión ha sido ampliamente desarrollado y estudiado hasta la fecha, no sucede lo mismo con el refuerzo a cortante, debido principalmente a su gran complejidad. Sin embargo, se debería dedicar más estudio a este tipo de refuerzo si se pretenden conservar los criterios de diseño en estructuras de hormigón armado, los cuales están basados en evitar el fallo a cortante por sus consecuencias catastróficas Esta ausencia de información y de normativa es la que justifica esta tesis doctoral. En este pro-yecto se van a desarrollar dos metodologías alternativas, que permiten estimar la capacidad resistente de vigas de hormigón armado, reforzadas a cortante mediante la técnica FRP-NSM. El primer método aplicado consiste en la implementación de una red neuronal artificial capaz de predecir adecuadamente la resistencia a cortante de vigas reforzadas con este método a partir de experimentos anteriores. Asimismo, a partir de la red se han llevado a cabo algunos estudios a fin de comprender mejor la influencia real de algunos parámetros de la viga y del refuerzo sobre la resistencia a cortante con el propósito de lograr diseños más seguros de este tipo de refuerzo. Una configuración óptima de la red requiere discriminar adecuadamente de entre los numerosos parámetros (geométricos y de material) que pueden influir en el compor-tamiento resistente de la viga, para lo cual se han llevado a cabo diversos estudios y pruebas. 
Using the second method, a design equation is developed that allows the capacity of beams shear-strengthened with NSM-FRP to be estimated in a simple way, and which could be proposed for inclusion in the main design guidelines. To reach this objective, a multi-objective optimization problem is formulated from the results of experimental tests carried out on reinforced concrete beams with and without FRP strengthening. The multi-objective problem is solved by means of genetic algorithms, specifically the NSGA-II algorithm, since it is better suited than classical optimization methods to problems with several objective functions. By comparing the predictions of both methods with experimental test results, the advantages and drawbacks of applying each of the two methodologies can be established. A parametric analysis is also carried out with both approaches in order to determine which parameters this type of strengthening is most sensitive to. Finally, a statistical reliability analysis of the design equations derived from the multi-objective optimization is performed. This analysis makes it possible to estimate the capacity of a beam shear-strengthened with NSM-FRP within a safety margin specified a priori.

ABSTRACT

The use of externally bonded (EB) fibre-reinforced polymer (FRP) composites has gained acceptance during the last two decades in the construction engineering community, particularly in the rehabilitation of reinforced concrete (RC) structures. Currently, to increase the shear resistance of RC beams, FRP sheets are externally bonded (EB-FRP) to the side faces of the beams to be strengthened, in different configurations. Of more recent application, the near-surface mounted FRP bar (NSM-FRP) method is another technique successfully used to increase the shear resistance of RC beams.
In the NSM method, FRP rods are embedded into grooves intentionally prepared in the concrete cover of the side faces of RC beams. While flexural strengthening has been widely developed and studied so far, the same does not hold for shear strengthening, mainly because of its great complexity. Nevertheless, if design criteria, which are based on avoiding shear failure and its catastrophic consequences, are to be preserved, more research should be devoted to this kind of strengthening. However, accurately calculating the shear capacity of FRP shear-strengthened RC beams remains a complex challenge that has not yet been fully resolved, owing to the numerous variables involved. The objective of this Thesis is to develop methodologies to evaluate the capacity of FRP shear-strengthened RC beams, approaching the problem from a point of view different from numerical modeling by using artificial intelligence techniques. With this purpose, two different approaches have been developed: one concerned with the use of artificial neural networks (ANNs), and the other based on an optimization approach developed jointly with ANNs and solved with genetic algorithms (GAs). With these approaches, some of the difficulties of numerical modeling can be overcome. As an alternative to conventional numerical techniques, neural networks do not provide closed-form solutions, but they do offer an accurate solution based on a representative set of historical examples of the relationship being modeled. Furthermore, they can adapt their solutions over time to include new data. As a second proposal, an optimization approach has also been developed to derive simple yet accurate shear design equations for this kind of strengthening.
This approach is developed in a multi-objective framework by considering experimental results of RC beams with and without NSM-FRP. Furthermore, the results obtained with the previous scheme based on ANNs are also used as a filter to choose the parameters to include in the design equations. Genetic algorithms are used to solve the optimization problem since they are especially suitable for solving multi-objective problems when compared to standard optimization methods. The key features of the two proposed procedures are outlined and their performance in predicting the capacity of NSM-FRP shear strengthened RC beams is evaluated by comparison with results from experimental tests and with predictions obtained using a simplified numerical model. A sensitivity study of the predictions of both models for the input parameters is also carried out.
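The multi-objective search for a design equation can be sketched with a minimal NSGA-II-style loop: non-dominated front sorting plus crowding-distance truncation of the combined parent and offspring population. The data, the equation form, and both objectives below (fit error versus a coefficient-magnitude proxy for simplicity) are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the design-equation fit: choose coefficients (a, b) of
# V = a * sqrt(fc) + b * rho_f, minimising (1) mean squared prediction error
# and (2) coefficient magnitude (a crude proxy for equation simplicity).
fc = rng.uniform(20, 50, 50)
rho = rng.uniform(0.001, 0.01, 50)
V_exp = 0.9 * np.sqrt(fc) + 250.0 * rho + rng.normal(0, 0.2, 50)  # synthetic "tests"

def objectives(pop):
    a, b = pop[:, 0:1], pop[:, 1:2]
    err = np.mean((a * np.sqrt(fc) + b * rho - V_exp) ** 2, axis=1)
    size = np.abs(pop).sum(axis=1)          # simplicity proxy (both minimised)
    return np.column_stack([err, size])

def fronts(F):
    """Non-dominated front rank per individual (0 = Pareto front)."""
    n = len(F); rank = np.full(n, -1); r = 0
    remaining = np.arange(n)
    while remaining.size:
        G = F[remaining]
        nd = [i for k, i in enumerate(remaining)
              if not ((G <= G[k]).all(1) & (G < G[k]).any(1)).any()]
        rank[nd] = r
        remaining = np.array([i for i in remaining if rank[i] == -1])
        r += 1
    return rank

def crowding(F):
    """Crowding distance within one front; boundary points get infinity."""
    n, m = F.shape; d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        d[order[0]] = d[order[-1]] = np.inf
        span = F[order[-1], j] - F[order[0], j] or 1.0
        d[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return d

N, gens = 40, 60
pop = rng.uniform([0, 0], [2, 500], size=(N, 2))
for _ in range(gens):
    parents = pop[rng.integers(0, N, (N, 2))]       # random mating pairs
    alpha = rng.random((N, 1))                      # blend crossover
    kids = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]
    kids += rng.normal(0, [0.05, 10], kids.shape)   # gaussian mutation
    allp = np.vstack([pop, kids])
    F = objectives(allp)
    r = fronts(F)
    keep = []
    for level in range(r.max() + 1):                # elitist survival
        idx = np.where(r == level)[0]
        if len(keep) + len(idx) <= N:
            keep.extend(idx)
        else:
            d = crowding(F[idx])
            keep.extend(idx[np.argsort(-d)][: N - len(keep)])
            break
    pop = allp[keep]

front = pop[fronts(objectives(pop)) == 0]
print(f"{len(front)} solutions on the final Pareto front")
```

In the thesis's setting each point on the final front represents one candidate design equation, and a single equation would then be selected from the front, for example after the reliability analysis described above.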

Relevância:

90.00%

Publicador:

Resumo:

The objective of this thesis is the investigation of numerical algorithms for the development of numerical tools for simulating both the seakeeping behaviour and the wave resistance of ships and floating structures. The first tool developed solves the wave diffraction and radiation problem. It is based on the finite element method (FEM) for solving the Laplace equation, together with schemes based on FEM, integration along streamlines, and finite differences, developed for the free-surface condition. Numerical tools have also been developed for solving the rigid-body dynamics of multibody systems with constraints. These tools have been integrated with the diffracted and radiated wave solver to address problems of interaction between bodies and waves. Coupling algorithms with other numerical tools have also been designed to solve multi-physics problems. In particular, couplings have been implemented with an FEM-based structural analysis tool for fluid-structure interaction problems, with a mooring-line solver, and with a numerical tool for computing flows in internal tanks for coupled seakeeping and sloshing problems. Numerical simulations have been carried out to validate and verify the developed algorithms, and to analyse various case studies with diverse applications in the fields of naval and ocean engineering and marine renewable energy.

ABSTRACT

The objective of this thesis is research on numerical algorithms for developing numerical tools to simulate seakeeping problems as well as wave-resistance problems of ships and floating structures. The first tool developed is a wave diffraction-radiation solver.
It is based on the finite element method (FEM) for solving the Laplace equation, together with numerical schemes based on FEM, streamline integration, and the finite difference method, tailored to the free-surface boundary condition. Numerical tools have also been developed to solve the rigid-body dynamics of multibody systems with links between bodies. These tools have been integrated with the wave diffraction-radiation solver to solve wave-body interaction problems. Coupling algorithms with other numerical tools have also been developed to solve multi-physics problems. In particular, coupling has been performed with an FEM structural solver to solve fluid-structure interaction problems, with a mooring solver, and with a solver capable of simulating internal flows in tanks to solve coupled seakeeping-sloshing problems. Numerical simulations have been carried out to validate and verify the developed algorithms, as well as to analyze case studies in the areas of marine engineering, offshore engineering, and offshore renewable energy.
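The potential-flow core of such a solver can be illustrated with a much simpler stand-in. The thesis solves the Laplace equation with FEM on unstructured meshes; the sketch below instead uses finite-difference Jacobi iteration on a rectangular tank cross-section, with an assumed prescribed potential on the free surface (top row) and no-penetration (homogeneous Neumann) walls. The geometry and boundary data are illustrative, not the thesis's.

```python
import numpy as np

# Solve Laplace's equation for a velocity potential phi on a ny-by-nx grid.
nx, ny = 41, 21
phi = np.zeros((ny, nx))
phi[0, :] = np.sin(np.linspace(0, np.pi, nx))   # prescribed free-surface value

for _ in range(10000):
    new = phi.copy()
    # Jacobi update: each interior point becomes the mean of its 4 neighbours.
    new[1:-1, 1:-1] = 0.25 * (phi[1:-1, :-2] + phi[1:-1, 2:]
                              + phi[:-2, 1:-1] + phi[2:, 1:-1])
    # No-penetration walls: zero normal derivative via copying the neighbour.
    new[-1, :] = new[-2, :]
    new[1:, 0] = new[1:, 1]
    new[1:, -1] = new[1:, -2]
    if np.max(np.abs(new - phi)) < 1e-7:
        phi = new
        break
    phi = new

# The discrete Laplacian residual in the interior should now be tiny.
res = np.max(np.abs(phi[1:-1, 1:-1]
                    - 0.25 * (phi[1:-1, :-2] + phi[1:-1, 2:]
                              + phi[:-2, 1:-1] + phi[2:, 1:-1])))
print(f"max interior residual: {res:.2e}")
```

A production seakeeping code replaces this loop with an FEM assembly and a sparse linear solve, and couples the resulting potential to the free-surface evolution and rigid-body equations at every time step, but the boundary-value problem being solved is the same.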