10 results for Revision of curriculum
at Universidad Politécnica de Madrid
Abstract:
This review of Electromagnetic Band Gap (EBG) metamaterials and beam-steering integrated antennas was carried out at IMST GmbH during a short collaboration stay. The activity is in line with the Coordinating the Antenna Research in Europe (CARE) initiative. The aim is to identify the newest trends and to suggest novel solutions and design methodologies for various applications.
Abstract:
In this paper we present a global overview of the recent study carried out in Spain for the new hazard map, whose final goal is the revision of the Building Code of our country (NCSE-02). The study was carried out by a working group joining experts from the Instituto Geográfico Nacional (IGN) and the Technical University of Madrid (UPM), and the different phases of the work were supervised by a committee of national experts from public institutions involved in the subject of seismic hazard. The Probabilistic Seismic Hazard Assessment (PSHA) method has been followed, quantifying the epistemic uncertainties through a logic tree and the aleatory ones, linked to the variability of parameters, by means of probability density functions and Monte Carlo simulations. In a first phase the inputs were prepared, which essentially are: 1) an updated project catalogue, homogenized to Mw; 2) proposed zoning models and source characterization; 3) Ground Motion Prediction Equations (GMPEs) calibrated with actual data, and a local model developed with data collected in Spain for Mw < 5.5. In a second phase, a sensitivity analysis of the different input options on hazard results was carried out in order to establish criteria for defining the branches of the logic tree and their weights. Finally, the hazard estimation was done with the logic tree shown in figure 1, which includes nodes for quantifying uncertainties corresponding to: 1) the method for hazard estimation (zoning and zoneless); 2) the zoning models; 3) the GMPE combinations used; and 4) the regression method for estimating source parameters.
In addition, the aleatory uncertainties corresponding to the magnitude of the events, the recurrence parameters and the maximum magnitude of each zone have also been considered by means of probability density functions and Monte Carlo simulations. The main conclusions of the study are presented here, together with the results obtained in terms of PGA and other spectral accelerations SA(T) for return periods of 475, 975 and 2475 years. Maps of the coefficient of variation (COV) are also presented to give an idea of the zones where the dispersion among results is highest and of the zones where the results are robust.
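The core probabilistic machinery described above, aleatory variability represented by probability density functions and propagated by Monte Carlo through a ground-motion model, can be sketched in a few lines. The GMPE form, its coefficients, the recurrence parameters and the site distance below are illustrative placeholders, not the values calibrated in the study:

```python
import random, math

random.seed(0)

# Hypothetical GMPE coefficients and source parameters (illustrative only):
a, b, c, sigma = -3.5, 1.0, 1.2, 0.6      # ln(PGA[g]) = a + b*M - c*ln(R) + eps
beta, m_min, m_max = 2.0, 4.0, 7.5        # truncated exponential magnitude model
annual_rate = 0.5                          # events/year above m_min in the zone

def sample_magnitude():
    # Inverse-transform sampling of a truncated Gutenberg-Richter distribution
    u = random.random()
    c_norm = 1.0 - math.exp(-beta * (m_max - m_min))
    return m_min - math.log(1.0 - u * c_norm) / beta

def sample_pga(r_km):
    m = sample_magnitude()
    eps = random.gauss(0.0, sigma)        # aleatory GMPE variability
    return math.exp(a + b * m - c * math.log(r_km) + eps)

# Probability that a random event at 30 km exceeds 0.1 g, by Monte Carlo
n = 100_000
p_exceed = sum(sample_pga(30.0) > 0.10 for _ in range(n)) / n
annual_exceed = annual_rate * p_exceed    # Poissonian rate of exceedances
print(f"P(PGA>0.1g | event) = {p_exceed:.3f}, "
      f"return period = {1/annual_exceed:.0f} yr")
```

Each branch of the logic tree (zoning model, GMPE combination, regression method) would correspond to one run of this kind, with the resulting hazard curves combined through the branch weights.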
Abstract:
During sentence processing there is a preference to treat the first noun phrase encountered as the subject and agent, unless it is marked otherwise. This preference leads to a conflict in thematic role assignment when the syntactic structure conforms to a non-canonical object-before-subject pattern. Left perisylvian and fronto-parietal brain networks have been found to be engaged by increased computational demands during sentence comprehension, while event-related brain potentials have been used to study the on-line manifestation of these demands. However, evidence regarding the spatiotemporal organization of brain networks in this domain is scarce. In the current study we used magnetoencephalography to track brain activity spatio-temporally while Spanish speakers were reading subject- and object-first cleft sentences. Both kinds of sentences remained ambiguous between a subject-first and an object-first interpretation until the appearance of the second argument. Results show the time-modulation of a frontal network at the disambiguation point of object-first sentences. Moreover, the time windows in which these effects took place have previously been related to thematic role integration (300–500 ms) and to sentence reanalysis and the resolution of conflicts during processing (beyond 500 ms post-stimulus). These results point to frontal cognitive control as a putative key mechanism that may operate when a revision of the sentence structure and meaning is necessary.
Abstract:
An evaluation of the seismic hazard on Hispaniola Island has been carried out as part of the cooperative project SISMO-HAITI, supported by the Technical University of Madrid (UPM) and developed by several Spanish universities and the National Observatory of Environment and Vulnerability (ONEV) of Haiti, with contributions from the Puerto Rico Seismic Network (PRSN) and the University Seismological Institute of the Dominican Republic (ISU). The study was aimed at obtaining results suitable for seismic design purposes. It started with the elaboration of a seismic catalogue for Hispaniola, requiring an exhaustive revision of data reported by more than 20 seismic agencies in addition to those from the PRSN and ISU. The final catalogue contains 96 historical earthquakes and 1690 instrumental events, and it was homogenized to moment magnitude, Mw. The seismotectonic models proposed for the region were revised and a new regional zonation was proposed, taking into account geological and tectonic data, seismicity, focal mechanisms and GPS observations. In parallel, the attenuation models for subduction and crustal zones revised in previous projects were examined and the most suitable ones for the Caribbean plate were selected. Then, a seismic hazard analysis was developed in terms of peak ground acceleration, PGA, and spectral accelerations, SA(T), for periods of 0.1, 0.2, 0.5, 1 and 2 s, using the Probabilistic Seismic Hazard Assessment (PSHA) methodology. As a result, different hazard maps were obtained for the quoted parameters, together with uniform hazard spectra for Port-au-Prince and the main cities of the country. Hazard deaggregation was also carried out for these cities, for the target motions given by the PGA and SA(1 s) obtained for return periods of 475, 975 and 2475 years, from which the controlling earthquakes for short- and long-period target motions were derived.
This study was started a few months after the 2010 earthquake, in response to an aid request from the Haitian government to the UPM, and the results are available for the definition of the first building code of Haiti.
Abstract:
Sustaining irrigated agriculture to meet food production needs while maintaining aquatic ecosystems is at the heart of many policy debates in various parts of the world, especially in arid and semi-arid areas. Researchers and practitioners are increasingly calling for integrated approaches, and policy-makers are progressively supporting the inclusion of ecological and social aspects in water management programs. This paper contributes to this policy debate by providing an integrated economic-hydrologic modeling framework that captures the socio-economic and environmental effects of various policy initiatives and of climate variability. This integration couples a risk-based economic optimization model with a hydrologic water management simulation model, both specified for the Middle Guadiana basin, a vulnerable drought-prone agro-ecological area with highly regulated river systems in southwest Spain. Two key water policy interventions were investigated: the implementation of minimum environmental flows (supported by the European Water Framework Directive, EU WFD), and a reduction in the legal amount of water delivered for irrigation (a planned measure included in the new Guadiana River Basin Management Plan, GRBMP, still under discussion). Results indicate that current patterns of excessive water use for irrigation in the basin may put environmental flow demands at risk, jeopardizing the WFD's goal of restoring the “good ecological status” of water bodies by 2015. Conflicts between environmental and agricultural water uses will be heightened during prolonged dry episodes, and particularly in summer low-flow periods, when crop irrigation water requirements increase substantially. Securing minimum stream flows would entail a substantial reduction in irrigation water use for rice cultivation, which might affect the profitability and economic viability of small rice-growing farms located upstream.
The new GRBMP could contribute to balancing competing water demands in the basin and to increasing economic water productivity, but it might not be sufficient to ensure the provision of environmental flows as required by the WFD. A thorough revision of the basin's water-use concession system for irrigation seems to be needed in order to bring the GRBMP in line with the WFD objectives. Furthermore, the study illustrates that social, economic, institutional and technological factors, in addition to bio-physical conditions, are important issues to be considered in designing and developing water management strategies. The research initiative presented in this paper demonstrates that hydro-economic models can explicitly integrate all these issues, constituting a valuable tool that could assist policy-makers in implementing sustainable irrigation policies.
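The trade-off at the core of the paper, reserving environmental flows reduces the water left for irrigation and hence farm income, can be illustrated with a deliberately simplified allocation model. The crop figures, flow volumes and greedy solution method below are hypothetical stand-ins for the paper's risk-based hydro-economic framework:

```python
# Illustrative crop data (max area in ha, water need in m3/ha, gross margin in eur/ha);
# these numbers are invented, not taken from the Middle Guadiana study.
crops = {
    "rice":  {"max_ha": 500, "m3_per_ha": 12000, "margin": 1800},
    "maize": {"max_ha": 800, "m3_per_ha": 6500,  "margin": 1100},
    "olive": {"max_ha": 600, "m3_per_ha": 2500,  "margin": 700},
}

def allocate(total_supply_m3, env_flow_m3):
    """Profit-maximising allocation: water goes first to the crops with the
    highest margin per cubic metre, after reserving the environmental flow.
    (Greedy is optimal here because constraints are simple per-crop bounds.)"""
    available = total_supply_m3 - env_flow_m3
    plan, profit = {}, 0.0
    ranked = sorted(crops.items(),
                    key=lambda kv: kv[1]["margin"] / kv[1]["m3_per_ha"],
                    reverse=True)
    for name, c in ranked:
        ha = max(0.0, min(c["max_ha"], available / c["m3_per_ha"]))
        available -= ha * c["m3_per_ha"]
        plan[name] = round(ha, 1)
        profit += ha * c["margin"]
    return plan, profit

plan_no_env, profit_no_env = allocate(12_000_000, 0)          # no flow requirement
plan_env, profit_env = allocate(12_000_000, 3_000_000)        # minimum flow reserved
print(plan_no_env, round(profit_no_env))
print(plan_env, round(profit_env))
```

Even in this toy version, reserving the environmental flow cuts the rice area first, since rice has the lowest margin per cubic metre of water, mirroring the paper's finding that rice-growing farms bear the brunt of minimum-flow requirements.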
Abstract:
For an adequate assessment of the safety margins of nuclear facilities, e.g. nuclear power plants, it is necessary to consider all the uncertainties that affect their design, performance and accident response. Nuclear data are a source of uncertainty involved in neutronics, fuel depletion and activation calculations. These calculations predict critical response functions during operation and in the event of an accident, such as the decay heat after reactor shutdown and the neutron multiplication factor. Thus, the impact of nuclear data uncertainties on these response functions needs to be addressed for a proper evaluation of the safety margins. Methodologies for uncertainty propagation need to be implemented in order to analyse this impact, and it is also necessary to understand the current status of nuclear data and their uncertainties in order to handle this type of data. Great efforts are under way to enhance the European capability to analyse, process and produce covariance data, especially for isotopes that are important for advanced reactors, and new methodologies and codes are being developed and implemented for using such data and evaluating their impact. These were the objectives of the European ANDES (Accurate Nuclear Data for nuclear Energy Sustainability) project, which provided the framework for this PhD thesis. Accordingly, a review of the state of the art of nuclear data and their uncertainties is first conducted, focusing on three kinds of data: decay data, fission yields and cross sections. A review of the current methodologies for propagating nuclear data uncertainties is also performed.
The Nuclear Engineering Department of UPM proposed a methodology for propagating uncertainties in depletion calculations, the Hybrid Method, which has been taken as the starting point of this thesis. This methodology has been implemented, developed and extended, and its advantages, drawbacks and limitations have been analysed. It is used in conjunction with the ACAB depletion code and is based on Monte Carlo sampling of the nuclear data with uncertainties. Different approaches are presented depending on the cross-section energy-group structure: one-group, one-group with correlated sampling, and multi-group; their differences and applicability criteria are discussed. Sequences have been developed for using nuclear data libraries in different storage formats: ENDF-6 (for evaluated libraries), COVERX (for the multi-group libraries of SCALE) and EAF (for activation libraries). The review of the state of the art of fission yield data reveals a lack of uncertainty information, specifically of complete covariance matrices. Furthermore, the international community has expressed renewed interest in this issue through the Working Party on International Nuclear Data Evaluation Co-operation (WPEC), whose Subgroup 37 (SG37) is dedicated to assessing the needs for improved nuclear data. This motivates a review of the methodologies for generating covariance data, among which a Bayesian/generalised least squares (GLS) updating sequence has been selected and implemented to address the lack of complete covariance matrices for fission yields.
Once the Hybrid Method had been implemented, developed and extended, along with the fission yield covariance generation capability, different nuclear applications were studied. The fission pulse decay heat problem is tackled first, because of its importance for any event after reactor shutdown and because it is a clean exercise for showing the impact and importance of decay data and fission yield uncertainties in conjunction with the new complete covariance matrices. Two fuel cycles of advanced reactors are then studied: the European Facility for Industrial Transmutation (EFIT) and the European Sodium Fast Reactor (ESFR), for which the impact of nuclear data uncertainties on response functions such as isotopic composition, decay heat and radiotoxicity is addressed. Different nuclear data libraries are used and compared in these studies, which also serve as frameworks for comparing the different approaches of the Hybrid Method with each other and with other methodologies: Total Monte Carlo (TMC), developed at NRG by A.J. Koning and D. Rochman, and NUDUNA, developed at AREVA GmbH by O. Buss and A. Hoefer. These comparisons reveal the advantages, limitations and range of application of the Hybrid Method.
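The Monte Carlo sampling idea underlying the Hybrid Method, drawing correlated perturbations of nuclear data from a covariance matrix and propagating them to a response function, can be sketched minimally. The yields, uncertainties, correlation and decay energies below are invented for illustration, and the "response" is a toy stand-in for a depletion-code calculation:

```python
import random, math

random.seed(1)

# Illustrative fission-yield data for two fission products (not from any
# evaluated library): mean yields, relative uncertainties, and a correlation.
mean_y = [0.060, 0.030]
rel_u  = [0.05, 0.08]
corr   = -0.5                       # anti-correlation, e.g. from a sum constraint

# 2x2 covariance matrix and its Cholesky factor, computed by hand
s1, s2 = mean_y[0]*rel_u[0], mean_y[1]*rel_u[1]
cov = [[s1*s1, corr*s1*s2], [corr*s1*s2, s2*s2]]
l11 = math.sqrt(cov[0][0])
l21 = cov[1][0] / l11
l22 = math.sqrt(cov[1][1] - l21*l21)

q = [1.2, 2.7]                      # illustrative decay energies (MeV)

def sample_response():
    # Correlated Gaussian sample of the two yields via the Cholesky factor
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    y1 = mean_y[0] + l11*z1
    y2 = mean_y[1] + l21*z1 + l22*z2
    return y1*q[0] + y2*q[1]        # toy "decay heat" response function

n = 50_000
samples = [sample_response() for _ in range(n)]
mean = sum(samples) / n
std = math.sqrt(sum((x - mean)**2 for x in samples) / (n - 1))
print(f"response = {mean:.4f} +/- {std:.4f} MeV/fission")
```

The spread of the sampled responses is the propagated uncertainty; with a full library, the same scheme runs the depletion code once per sampled nuclear-data set.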
Abstract:
The design of retaining walls subjected to seismic loads has traditionally been carried out with methods based on pseudo-analytical procedures such as the Mononobe-Okabe method, which has on occasion led to unsafe designs and to the failure of many retaining walls under earthquake action. The recommendations gathered in the Mononobe-Okabe theory have been included in numerous seismic design codes, and it is clear that a revision of these recommendations must be undertaken. An important review of the design methods for earthquake-resistant structures such as retaining walls in highly seismic areas is currently taking place, prompted by the introduction at the beginning of the 1990s of the Displacement Response Spectrum (DRS) and the Capacity Demand Diagram (CDD), which represent an important change in the way the Elastic Response Spectrum (ERS) is presented. On the other hand, the dynamic characteristics of a soil under earthquake action have traditionally been described by the shear-wave velocity of the site, together with the plasticity and damping characteristics of the soil. The principle of energy conservation explains why an upward-propagating seismic shear wave can be amplified when travelling from a medium with high shear-wave velocity (rock) into another with lower velocity (a soil deposit), as happened in the 1985 Mexico earthquake. This amplification is a function of the velocity gradient, or of the impedance contrast at the boundary between the two media. A method is proposed in this paper for the design of retaining walls in different soils subjected to earthquake action, based on Performance-Based Seismic Design.
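The impedance-contrast amplification mentioned in the abstract follows from conservation of energy flux: equating the flux (proportional to the product of density, shear-wave velocity and amplitude squared) on both sides of the interface gives a first-order amplitude ratio of sqrt(rho_rock*Vs_rock / rho_soil*Vs_soil). A minimal sketch with illustrative material properties:

```python
import math

def amplification(rho_rock, vs_rock, rho_soil, vs_soil):
    """First-order amplitude amplification of a vertically propagating shear
    wave crossing from rock into a softer soil deposit, from conservation of
    energy flux (ignoring damping and resonance effects)."""
    return math.sqrt((rho_rock * vs_rock) / (rho_soil * vs_soil))

# Illustrative values (kg/m3 and m/s): stiff bedrock over a very soft clay
# deposit, loosely in the spirit of the Mexico City lacustrine sediments.
A = amplification(rho_rock=2600, vs_rock=1500, rho_soil=1300, vs_soil=80)
print(f"amplification ~ {A:.1f}x")
```

Such order-of-magnitude amplifications are why soft-basin sites can shake far more strongly than nearby rock sites for the same earthquake.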
Abstract:
The availability of electronic health data favors scientific advance through the creation of repositories for secondary use, and data anonymization is a mandatory step to comply with current legislation. A service for the pseudonymization of electronic healthcare record (EHR) extracts, aimed at facilitating the exchange of clinical information for secondary use in compliance with data protection legislation, is presented. According to ISO/TS 25237, pseudonymization is a particular type of anonymization. The tool performs anonymization while maintaining three quasi-identifiers (gender, date of birth and place of residence) with a degree of specification selected by the user. The system is based on the ISO/EN 13606 standard, using characteristics of the standard that are specifically favorable for anonymization. The service is made up of two independent modules: the demographic server and the pseudonymizing module. The demographic server supports the permanent storage of the demographic entities and the management of the identifiers. The pseudonymizing module anonymizes the ISO/EN 13606 extracts. The pseudonymizing process consists of four phases: storage of the demographic information included in the extract, substitution of the identifiers, elimination of the demographic information from the extract, and elimination of key data in free-text fields. The described pseudonymizing system was used in three telemedicine research projects with satisfactory results. A problem was detected with the data type of a demographic field, and a proposal for modification was prepared for the group in charge of drawing up and revising the ISO/EN 13606 standard.
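The core of the pseudonymizing process lends itself to a compact sketch: a demographic table issues stable pseudonyms while the quasi-identifiers are generalised to a user-selected precision. The record layout and field names below are hypothetical; real extracts follow the ISO/EN 13606 structure, and this sketch omits the free-text scrubbing phase entirely:

```python
import secrets

pseudonym_table = {}   # stands in for the "demographic server": real id -> pseudonym

def pseudonymize(record, dob_precision="year"):
    """Replace the patient identifier with a stored pseudonym and generalise
    the three quasi-identifiers (gender, date of birth, place of residence).
    Illustrative sketch only; field names are invented."""
    real_id = record["patient_id"]
    if real_id not in pseudonym_table:
        pseudonym_table[real_id] = secrets.token_hex(8)   # stable random pseudonym
    dob = record["date_of_birth"]                         # "YYYY-MM-DD"
    dob_generalised = dob[:4] if dob_precision == "year" else dob[:7]
    return {
        "pseudonym": pseudonym_table[real_id],
        "gender": record["gender"],
        "date_of_birth": dob_generalised,
        "residence": record["residence_region"],          # already coarse-grained
        "clinical_data": record["clinical_data"],         # payload kept as-is
    }

rec = {"patient_id": "12345678A", "gender": "F", "date_of_birth": "1980-06-15",
       "residence_region": "Madrid", "clinical_data": {"dx": "I10"}}
out = pseudonymize(rec)
print(out)
```

Keeping the identifier-to-pseudonym mapping in a separate, access-controlled module is what makes this pseudonymization (re-identifiable under governance) rather than irreversible anonymization.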
Abstract:
Spanish coastal legislation has changed in response to changing circumstances. The objective of the 1969 Spanish Coastal Law was to assign responsibilities for the Public Domain to the authorities. The 1980 Spanish Coastal Law addressed infractions and sanctions. The 1988 Spanish Coastal Law completed the responsibilities and sanctions aspects and included others related to the delimitation of the Public Domain, the private properties close to the Public Domain, and limitations on land use in this area. The 1988 law has been controversial since its publication. The “European Parliament Report on the impact of extensive urbanization in Spain on individual rights of European citizens, on the environment and on the application of EU law, based upon petitions received”, published in 2009, recommended that the Spanish authorities urgently revise the Coastal Law, with the main objective of protecting property owners whose buildings do not have negative effects on the coastal environment. The recommended revision has been carried out in the new Spanish Coastal Law, “Ley 2/2013, de 29 de mayo, de protección y uso sostenible del litoral y de modificación de la Ley 22/1988, de 28 de julio, de Costas”, published in May 2013. This is the first major change in the 25 years since the 1988 law. This paper compares the 1988 and 2013 Spanish Coastal Law documents, highlighting the most important issues, such as the description of the Public Domain, the limitations on private properties close to the Public Domain limit, the influence of climate change, and the duration of authorizations. The paper includes proposals for further improvements.
Abstract:
This thesis aims to develop robust and reliable damage identification methods for experimental structural systems, in particular reinforced concrete (RC) structures externally strengthened with fiber-reinforced polymer (FRP) strips. The failure mode of this type of structural system is critical, since it is usually due to a sudden, brittle debonding of the FRP reinforcement originating at intermediate flexural cracks. Detecting this debonding in its initial stage is therefore essential to prevent future failures, which may be catastrophic. Initially, a review of the Electro-Mechanical Impedance (EMI) method is carried out in order to expose its capabilities for local damage detection. Once the appropriate technology is selected, including an impedance analyzer as well as novel PZT sensors for smart monitoring, an automated procedure is designed based on the impedance signatures of several lab-scale structures. Given that capturing impedance measurements is possible thanks to an adequately deployed PZT sensor network, the presence of damage is estimated by analysing the results of different damage indices taken from the literature. To make this process automatic, so that no a priori knowledge of the EMI method is needed to carry out an experimental test, a Graphical User Interface has been designed, turning the impedance measurements into an easy and intuitive procedure. Damage is then assessed through the corresponding damage indices, estimating not only its severity but also its approximate location.
Running these tests on any kind of structure generates large amounts of data to be processed, and sometimes the information provided by damage indices is not enough for a complete analysis of the structural health condition. In most cases damage patterns can be found in the data, but no a priori knowledge of the health condition of the structure is available. At this point, pattern recognition techniques were investigated, particularly unsupervised learning techniques, which have found interesting applications in the field of medicine. From this investigation a creative and innovative idea arose: to detect and track the evolution of damage in different structures as if it were a cancer propagating through a human body. In that sense, the impedance signatures are used as intrinsic information on the health condition of the structure, so that the same clustering algorithms applied in cancer research can be applied to the problem addressed in this dissertation. Hierarchical clustering is adopted since it also provides a graphical display of the clustered data, including quantitative and qualitative information about the damage. The performance of this approach is first investigated using three lab-scale structures: a simple aluminium beam, a bolt-jointed aluminium beam and an FRP-strengthened concrete specimen. The first shows the performance of the method in simple single- and multiple-damage scenarios, so that the conclusions extracted can be applied to the other two tests, which are designed to simulate debonding conditions in different structures. Once the impedance-based hierarchical clustering method is proven successful, it is applied to the structural system studied in this dissertation, RC structures externally strengthened with FRP strips, where the debonding failure at the interface between the FRP and the concrete is successfully detected and classified, proving the feasibility of the method.
Finally, as an alternative to the previous approach, a continuous monitoring procedure for the FRP-concrete interface is proposed, based on a network of FBG sensors permanently deployed within that interface. In this way, strain measurements are obtained under controlled loading conditions and used to implement a multi-objective model updating method, solved by a multi-objective expansion of the Particle Swarm Optimization (PSO) method. The feasibility of this last proposal is investigated and successfully proven on both numerical and experimental RC beams strengthened with FRP.
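The hierarchical-clustering step can be sketched on toy impedance signatures. The agglomerative routine below is a stdlib-only, average-linkage illustration, and the signature values and cluster count are invented; an actual analysis would rely on a dedicated library (e.g. scipy.cluster.hierarchy) and real EMI spectra:

```python
import math

def rmsd(a, b):
    """Root-mean-square deviation between two impedance signatures, a common
    damage metric in EMI-based monitoring."""
    return math.sqrt(sum((x - y)**2 for x, y in zip(a, b)) / len(a))

def agglomerate(signatures, n_clusters):
    """Plain average-linkage agglomerative clustering: repeatedly merge the
    two closest clusters until n_clusters remain."""
    clusters = [[i] for i in range(len(signatures))]
    while len(clusters) > n_clusters:
        best, pair = float("inf"), None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = sum(rmsd(signatures[a], signatures[b])
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if d < best:
                    best, pair = d, (i, j)
        i, j = pair
        clusters[i].extend(clusters.pop(j))
    return clusters

# Toy impedance signatures: two baseline-like states and two damaged-like states
sigs = [
    [1.00, 0.98, 1.02, 0.99],   # baseline
    [1.01, 0.97, 1.03, 1.00],   # baseline, repeated measurement
    [1.35, 1.30, 0.70, 0.65],   # damage state A
    [1.33, 1.31, 0.72, 0.66],   # damage state A, repeated measurement
]
print(agglomerate(sigs, 2))
```

Grouping repeated measurements of the same structural state into one cluster, without labels, is exactly the unsupervised behaviour the thesis exploits to separate healthy from damaged (or debonded) conditions.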