953 results for beam propagation method (BPM)
Abstract:
In this letter, we report a new method for optical switching based on the electro-optical properties of liquid crystal materials, in particular of the nematic type. The basis of this new method is the use of a twisted wedge structure that has not been reported elsewhere. Over the past several years, great efforts in integrated optics have been made to develop fast optical switching devices using electro-optic, acousto-optic or magneto-optic materials. A mechanically operated optical switch made of graded-index rod lenses and electromagnets has also been proposed. Switches of this kind include one input and two output waveguides and, depending on the applied voltage, light incident on the switch exits through one or the other of the two output waveguides.
Abstract:
The aim of this work is to present Exercise I-1b, the “pin-cell burn-up benchmark” proposed in the framework of the OECD LWR UAM project. Its objective is to address the uncertainty due to the basic nuclear data, as well as the impact of processing the nuclear and covariance data, in a pin-cell depletion calculation. Four different sensitivity/uncertainty propagation methodologies participate in this benchmark (GRS, NRG, UPM, and SNU&KAERI). The paper describes the main features of the UPM model (hybrid method) compared with the other methodologies. The requested output provided by UPM is presented and discussed against the results of the other methodologies.
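As a minimal illustration of the depletion step underlying such a pin-cell burn-up calculation, the sketch below advances a toy two-nuclide chain over one irradiation interval with a burnup-matrix exponential. The flux and cross sections are placeholder values, not the benchmark data, and the code is not any participant's tool.

```python
# Minimal sketch (not the benchmark model): a two-nuclide depletion step
# solved with a burnup-matrix exponential, N(t) = expm(A*t) @ N0.
# Numbers below are illustrative placeholders, not evaluated nuclear data.
import numpy as np
from scipy.linalg import expm

phi = 3.0e14             # assumed one-group flux [n/cm^2/s]
sigma_a_U235 = 600e-24   # illustrative absorption cross section [cm^2]
sigma_c_U238 = 2.7e-24   # illustrative capture cross section [cm^2]

# State vector: [U-235, U-238]; only loss terms are kept for brevity.
A = np.array([[-sigma_a_U235 * phi, 0.0],
              [0.0, -sigma_c_U238 * phi]])

N0 = np.array([1.0e21, 2.0e22])   # initial number densities [1/cm^3]
t = 100 * 24 * 3600.0             # 100 days of irradiation [s]

N_end = expm(A * t) @ N0
print("End-of-step number densities:", N_end)
```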
Abstract:
An uncertainty propagation methodology based on the Monte Carlo method is applied to PWR nuclear design analysis to assess the impact of nuclear data uncertainties in 235,238U, 239Pu and the thermal scattering library for hydrogen in water. This uncertainty analysis is compared with the design and acceptance criteria to assure the adequacy of bounding estimates in safety margins.
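A minimal sketch of the sampling-based propagation idea is given below, assuming a toy one-group k-infinity model and an invented covariance matrix; it is not the PWR design model nor the actual 235,238U, 239Pu and thermal scattering data of the paper.

```python
# Hedged sketch of Monte Carlo nuclear-data uncertainty propagation:
# sample correlated cross-section perturbations from an assumed covariance
# matrix, push each sample through a toy one-group k-inf model, and read
# the output spread. All numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Nominal one-group data (illustrative): [nu*sigma_f, sigma_a_fuel, sigma_a_mod]
x0 = np.array([0.070, 0.040, 0.012])
rel_cov = np.array([[0.0004, 0.0001, 0.0   ],
                    [0.0001, 0.0009, 0.0   ],
                    [0.0,    0.0,    0.0016]])   # assumed relative covariance
cov = rel_cov * np.outer(x0, x0)

def k_inf(x):
    nu_sig_f, sig_a_fuel, sig_a_mod = x
    return nu_sig_f / (sig_a_fuel + sig_a_mod)

samples = rng.multivariate_normal(x0, cov, size=5000)
k = np.apply_along_axis(k_inf, 1, samples)
print(f"k_inf = {k.mean():.4f} +/- {k.std(ddof=1):.4f} (1 sigma)")
```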
Abstract:
Of the many state-of-the-art methods for cooperative localization in wireless sensor networks (WSN), only very few adapt well to mobile networks. The main problems of the well-known algorithms based on nonparametric belief propagation (NBP) are the high communication cost and inefficient sampling techniques. Moreover, they either do not use smoothing or only apply it offline. Therefore, in this article, we propose more flexible and efficient variants of NBP for cooperative localization in mobile networks. In particular, we provide: i) an optional 1-lag smoothing done almost in real time, ii) a novel low-cost communication protocol based on package approximation and censoring, iii) higher robustness of the standard mixture importance sampling (MIS) technique, and iv) a higher amount of information in the importance densities by using the population Monte Carlo (PMC) approach or an auxiliary variable. Through extensive simulations, we confirm that all the proposed techniques outperform the standard NBP method.
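The particle-based weighting that underlies NBP-style cooperative localization can be sketched for a single node ranging to three known-position neighbours. This is plain importance sampling with assumed noise levels, not the smoothed, censored protocol proposed in the article.

```python
# Minimal sketch: one unknown node, three neighbours with known positions,
# noisy range measurements, particle weighting by Gaussian range likelihoods.
import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])   # known positions
true_pos = np.array([4.0, 3.0])
sigma = 0.5                                                   # ranging noise std
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, sigma, 3)

# Draw particles from a broad prior over the deployment area.
particles = rng.uniform(0.0, 10.0, size=(2000, 2))

# Weight each particle by the product of Gaussian range likelihoods.
pred = np.linalg.norm(particles[:, None, :] - anchors[None, :, :], axis=2)
log_w = -0.5 * np.sum((pred - ranges) ** 2, axis=1) / sigma ** 2
w = np.exp(log_w - log_w.max())
w /= w.sum()

estimate = w @ particles
print("Estimated position:", estimate, "true:", true_pos)
```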
Abstract:
Non-parametric belief propagation (NBP) is a well-known message passing method for cooperative localization in wireless networks. However, due to the over-counting problem in networks with loops, NBP's convergence is not guaranteed, and its estimates are typically less accurate. One solution to this problem is non-parametric generalized belief propagation based on a junction tree. However, this method is intractable in large-scale networks due to the high complexity of the junction tree formation and the high dimensionality of the particles. Therefore, in this article, we propose non-parametric generalized belief propagation based on a pseudo-junction tree (NGBP-PJT). The main difference compared with the standard method is the formation of the pseudo-junction tree, which represents an approximated junction tree based on a thin graph. In addition, in order to decrease the number of high-dimensional particles, we use a more informative importance density function and reduce the dimensionality of the messages. As a by-product, we also propose NBP based on a thin graph (NBP-TG), a cheaper variant of NBP, which runs on the same graph as NGBP-PJT. According to our simulation and experimental results, the NGBP-PJT method outperforms NBP and NBP-TG in terms of accuracy, computational cost, and communication cost in reasonably sized networks.
Abstract:
Two design procedures for Radial Line Slot Antennas (RLSAs) with circular polarization and either maximum gain or an arbitrary shaped pattern are proposed. Firstly, a method to design an RLSA with any desired pattern is presented. It is based on an optimization algorithm, and some measures need to be taken to ensure its fast convergence and stability. Secondly, a fast technique to calculate the length and the position of every slot in a high-gain RLSA with uniform field distribution is described. Both procedures are validated with the design of three antennas with different characteristics.
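For orientation only, a generic linear least-squares pattern-synthesis step is sketched below for a line of isotropic elements with an assumed flat-topped target pattern. An optimization-based design loop of the kind described above would wrap a comparable fitting step with the slot length/position model; this is not the authors' RLSA procedure.

```python
# Generic pattern synthesis illustration: solve a linear least-squares
# problem for the complex excitations of a uniform line of elements so that
# the array factor approximates a desired pattern sampled over angle.
import numpy as np

k = 2 * np.pi               # wavenumber, wavelength = 1 (assumed)
d = 0.5                     # element spacing in wavelengths (assumed)
n_elem = 16
theta = np.linspace(-np.pi / 2, np.pi / 2, 181)

# Steering matrix: A[m, n] = exp(j * k * n * d * sin(theta_m))
n_idx = np.arange(n_elem)
A = np.exp(1j * k * d * np.outer(np.sin(theta), n_idx))

# Desired pattern: a flat-topped sector beam (illustrative target).
target = np.where(np.abs(theta) < np.deg2rad(15), 1.0, 0.0)

excitations, *_ = np.linalg.lstsq(A, target, rcond=None)
pattern = np.abs(A @ excitations)
print("Peak sidelobe (linear):", pattern[np.abs(theta) > np.deg2rad(25)].max())
```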
Abstract:
An uncertainty propagation methodology based on the Monte Carlo method is applied to PWR nuclear design analysis to assess the impact of nuclear data uncertainties. The importance of the nuclear data uncertainties for 235,238U, 239Pu, and the thermal scattering library for hydrogen in water is analyzed. This uncertainty analysis is compared with the design and acceptance criteria to assure the adequacy of bounding estimates in safety margins.
Abstract:
For an adequate assessment of the safety margins of nuclear facilities, e.g. nuclear power plants, it is necessary to consider all possible uncertainties that affect their design, performance and possible accidents. Nuclear data are a source of uncertainty involved in neutronics, fuel depletion and activation calculations. These calculations predict critical response functions during operation and in the event of an accident, such as the decay heat and the neutron multiplication factor. Thus, the impact of nuclear data uncertainties on these response functions needs to be addressed for a proper evaluation of the safety margins. Methodologies for performing uncertainty propagation calculations need to be implemented in order to analyse the impact of nuclear data uncertainties. It is likewise necessary to understand the current status of nuclear data and their uncertainties in order to be able to handle this type of data. Great efforts are underway to enhance the European capability to analyse, process and produce covariance data, especially for isotopes of importance for advanced reactors. At the same time, new methodologies and codes are being developed and implemented for using these data and evaluating their impact. These were the objectives of the European ANDES (Accurate Nuclear Data for nuclear Energy Sustainability) project, which provided the framework for this PhD thesis. Accordingly, a review of the state of the art of nuclear data and their uncertainties is first conducted, focusing on the three kinds of data: decay data, fission yields and cross sections. A review of the current methodologies for propagating nuclear data uncertainties is also performed.
The Nuclear Engineering Department of UPM has proposed a methodology for propagating uncertainties in depletion calculations, the Hybrid Method, which has been taken as the starting point of this thesis. This methodology has been implemented, developed and extended, and its advantages, drawbacks and limitations have been analysed. It is used in conjunction with the ACAB depletion code and is based on Monte Carlo sampling of the nuclear data with uncertainties. Different approaches are presented depending on the cross-section energy-group structure: one group, one group with correlated sampling, and multi-group; their differences and applicability criteria are discussed. Sequences have been developed for using nuclear data libraries in different storage formats: ENDF-6 (for evaluated libraries), COVERX (for the multi-group libraries of SCALE) and EAF (for activation libraries). The review of the state of the art of fission yield data identifies a lack of uncertainty information, specifically of complete covariance matrices. Furthermore, the international community has expressed a renewed interest in this issue through the Working Party on International Nuclear Data Evaluation Co-operation (WPEC), whose Subgroup 37 (SG37) is dedicated to assessing the needs for nuclear data improvement. This motivates a review of the methodologies for generating covariance data for fission yields, from which a Bayesian/generalised least squares (GLS) updating sequence has been selected and implemented to provide the missing complete covariance matrices. Once the Hybrid Method has been implemented, developed and extended, along with the fission yield covariance generation capability, different nuclear applications are studied. The fission pulse decay heat problem is tackled first because of its importance for any event after reactor shutdown, and because it is a clean exercise for showing the impact and importance of decay data and fission yield uncertainties in conjunction with the new covariance matrices. Two fuel cycles of advanced reactors are then studied: the European Facility for Industrial Transmutation (EFIT) and the European Sodium Fast Reactor (ESFR), for which the impact of nuclear data uncertainties on response functions such as isotopic composition, decay heat and radiotoxicity is addressed. Different nuclear data libraries are used and compared. These applications serve as frameworks for comparing the different approaches of the Hybrid Method, and also for comparing it with other methodologies: Total Monte Carlo (TMC), developed at NRG by A.J. Koning and D. Rochman, and NUDUNA, developed at AREVA GmbH by O. Buss and A. Hoefer. These comparisons reveal the advantages, limitations and range of application of the Hybrid Method.
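For reference, a generic Bayesian/GLS update of a prior fission-yield vector y with covariance V_y against constraint data m (with covariance V_m and sensitivity matrix S) takes the standard form below; the specific constraints and notation used in the thesis may differ.

```latex
% Standard GLS/Bayesian update (generic form).
\begin{align}
  \mathbf{y}' &= \mathbf{y} + \mathbf{V}_y \mathbf{S}^{\mathsf T}
      \left(\mathbf{S}\mathbf{V}_y\mathbf{S}^{\mathsf T} + \mathbf{V}_m\right)^{-1}
      \left(\mathbf{m} - \mathbf{S}\mathbf{y}\right), \\
  \mathbf{V}_y' &= \mathbf{V}_y - \mathbf{V}_y \mathbf{S}^{\mathsf T}
      \left(\mathbf{S}\mathbf{V}_y\mathbf{S}^{\mathsf T} + \mathbf{V}_m\right)^{-1}
      \mathbf{S}\mathbf{V}_y .
\end{align}
```

The update simultaneously shifts the prior yields towards the constraints and fills in the off-diagonal terms of the posterior covariance, which is how complete covariance matrices can be produced from sparse uncertainty information.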
Abstract:
The problem of finding the optimum mesh design in the finite element method under the restriction of a given number of degrees of freedom is an interesting one, particularly from the point of view of practical applications of the method. At present, the usual procedures introduce new degrees of freedom (remeshing) into a given mesh in order to obtain a more adequate one from the point of view of the calculation results (error uniformity). However, from the solution of the optimum mesh problem with a specified number of degrees of freedom, some useful recommendations and criteria for mesh construction may be drawn. For 1-D problems, namely for simple truss and beam elements, analytical solutions have been found and are given in this paper. For the more complex 2-D problems (plane stress and plane strain), numerical methods based on optimization procedures have to be used to obtain the optimum mesh. The objective function used in the minimization process has been the total potential energy. Some examples are presented. Finally, some conclusions and hints about possible new developments of these techniques are given.
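A minimal sketch of the idea for a 1-D bar, assuming unit axial stiffness and a linearly varying load (not one of the paper's examples): since the finite element solution minimizes the total potential energy over the trial space, the interior node positions of a mesh with a fixed number of elements can themselves be optimized to drive that energy as low as possible.

```python
# Optimum 1-D mesh with a fixed number of DOFs: bar on [0, 1], EA = 1,
# fixed at x = 0, axial distributed load q(x) = x. Node positions are
# optimized to minimize the total potential energy of the FE solution.
import numpy as np
from scipy.optimize import minimize

n_elem = 4   # fixed number of elements (i.e. fixed DOF count)

def potential_energy(widths):
    w = np.abs(widths) + 1e-6
    x = np.concatenate(([0.0], np.cumsum(w)))
    x = x / x[-1]                          # node positions on [0, 1]
    n = len(x) - 1
    K = np.zeros((n + 1, n + 1))
    f = np.zeros(n + 1)
    for e in range(n):
        a, b = x[e], x[e + 1]
        h = b - a
        K[e:e + 2, e:e + 2] += np.array([[1, -1], [-1, 1]]) / h
        # Consistent load vector for q(x) = x with linear shape functions.
        f[e] += h * (2 * a + b) / 6.0
        f[e + 1] += h * (a + 2 * b) / 6.0
    u = np.zeros(n + 1)                    # essential BC u(0) = 0
    u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
    return 0.5 * u @ K @ u - f @ u

res = minimize(potential_energy, x0=np.ones(n_elem), method="Nelder-Mead")
x_opt = np.concatenate(([0.0], np.cumsum(np.abs(res.x) + 1e-6)))
print("Optimized node positions:", x_opt / x_opt[-1])
```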
Abstract:
A novel slow-push asteroid deflection strategy has recently been proposed in which an Earth-threatening asteroid can be deflected by exploiting the momentum transmitted by a collimated beam of quasi-neutral plasma impinging against the asteroid surface. The beam can be generated with state-of-the-art ion engines from a hovering spacecraft, with no need for physical attachment or gravitational interaction with the celestial body. The spacecraft, placed at a distance of a few asteroid diameters, would need an ion thruster pointed at the asteroid surface as well as a second propulsion system to compensate for the ion engine reaction and keep the distance between the asteroid and the shepherd satellite constant throughout the deflection phase. A comparison in terms of required spacecraft mass per total imparted deflection impulse shows that the method outperforms the gravity tractor concept by more than one order of magnitude for asteroids up to about 200 m in diameter. The two methods would yield comparable performance for asteroids larger than about 2 km.
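A back-of-the-envelope comparison of the two contact-free concepts is sketched below, using assumed (illustrative) values for the asteroid, thrust, spacecraft mass and hovering distance rather than the figures of the study: the ion beam shepherd pushes with the intercepted engine thrust, while the gravity tractor pulls with the mutual gravitational attraction.

```python
# Illustrative force comparison: ion beam shepherd vs. gravity tractor.
import numpy as np

G = 6.674e-11              # gravitational constant [m^3 kg^-1 s^-2]
rho = 2000.0               # assumed asteroid bulk density [kg/m^3]
D = 150.0                  # assumed asteroid diameter [m]
M_ast = rho * np.pi / 6 * D ** 3

thrust = 0.1               # assumed ion-beam thrust intercepted by asteroid [N]
m_sc = 2000.0              # assumed spacecraft mass [kg]
d = 1.5 * D                # assumed hovering distance [m]

F_ibs = thrust
F_gt = G * m_sc * M_ast / d ** 2
print(f"Ion beam shepherd force: {F_ibs:.3e} N")
print(f"Gravity tractor force:   {F_gt:.3e} N  (ratio {F_ibs / F_gt:.1f}x)")
```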
Abstract:
The study developed in this thesis focuses on the numerical modelling of the propagation phase of fast landslides with the Smoothed Particle Hydrodynamics (SPH) meshless method, which has the great advantage of handling large-deformation problems while avoiding the expensive remeshing operations required by mesh-based methods such as the Finite Element Method. Special attention is given to the role played by rheology and pore water pressure during these natural hazards. The mathematical framework used is based on the v - pw Biot-Zienkiewicz formulation, which represents the behaviour, formulated in terms of soil skeleton velocity and pore water pressure, of the mixture of solid particles and pore water in a saturated medium. The governing equations are: • the mass balance equation for the pore water phase, • the momentum balance equation for the pore water phase and the mixture, • the constitutive equation and • a kinematic equation. Due to their shape and geometrical properties, landslides have small depths in comparison with their length or width; the mathematical model can therefore be simplified by depth-integrating the equations, switching from a 3D to a 2D model, which presents an excellent combination of accuracy, simplicity and computational cost. The proposed model differs from previous depth-integrated models by including a sub-model able to provide information on the pore water pressure profile at each computational step of the landslide's propagation. In an efficient way, the evolution of the pore water pressure profiles is solved numerically through a 1D explicit Finite Difference scheme at each SPH node. This new approach is able to take into account the variation of the pore water pressure due to changes of height, vertical consolidation or changes of total stress. Concerning the constitutive behaviour, one of the main issues when modelling fast landslides is the difficulty of simulating, with the same constitutive or rheological model, the transition from the triggering phase, where the landslide behaves like a solid, to the propagation phase, where the landslide behaves in a fluid-like manner. In this thesis, a new rheological model is proposed, based on the Perzyna viscoplastic model, with viscoplasticity seen as the key to bridging the triggering and propagation phases. In order to validate the mathematical model and the numerical approach, benchmarks and laboratory experiments are reproduced and compared to analytical solutions where possible. Finally, applications to real cases are studied, with particular attention paid to the Aberfan flowslide of 1966, showing that the mathematical model successfully simulates this kind of natural hazard.
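A minimal sketch of the kind of 1-D explicit finite-difference pore-pressure update described above, treating vertical consolidation at a single SPH node as a diffusion problem with assumed parameters (not the thesis cases):

```python
# Explicit FD update of an excess pore-pressure profile, du/dt = cv * d2u/dz2,
# drained at the free surface and impervious at the base.
import numpy as np

cv = 1.0e-2        # assumed consolidation coefficient [m^2/s]
h = 2.0            # assumed landslide depth at this node [m]
nz = 21
dz = h / (nz - 1)
dt = 0.4 * dz ** 2 / cv          # explicit stability: dt <= 0.5*dz^2/cv

u = np.full(nz, 10.0)            # initial excess pore pressure [kPa]
u[-1] = 0.0                      # drained (free) surface at the top

for _ in range(500):
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + cv * dt / dz ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])
    u_new[0] = u_new[1]          # impervious base: zero-gradient condition
    u_new[-1] = 0.0
    u = u_new

print("Pore-pressure profile from base to surface:", np.round(u, 2))
```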
Abstract:
This paper proposes an extension of methods used to predict the propagation of landslides having a long runout to smaller landslides with much shorter propagation distances. The method is based on: (1) a depth-integrated mathematical model including the coupling between the soil skeleton and the pore fluids, (2) suitable rheological models describing the relation between the stress and the rate-of-deformation tensors for fluidised soils, and (3) a meshless numerical method, Smoothed Particle Hydrodynamics, which separates the computational mesh (or set of computational nodes) from the mesh describing the terrain topography, which is of structured type, thus accelerating search operations. The proposed model is validated using two examples for which analytical solutions exist, and it is then applied to two short-runout landslides which happened in Hong Kong in 1995 and for which information is available.
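A minimal sketch of a structured, cell-based neighbour search of the kind that keeps SPH interaction queries cheap; this is an illustrative implementation, not the code used in the paper.

```python
# Cell-list neighbour search: hash particles into square cells of size h
# (the interaction radius) so each query only visits adjacent cells.
import numpy as np
from collections import defaultdict

def build_cells(points, h):
    """Hash each particle index into a square cell of size h."""
    cells = defaultdict(list)
    for i, p in enumerate(points):
        cells[(int(p[0] // h), int(p[1] // h))].append(i)
    return cells

def neighbours(points, cells, i, h):
    """Return indices of particles within distance h of particle i."""
    ci, cj = int(points[i, 0] // h), int(points[i, 1] // h)
    out = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for j in cells.get((ci + di, cj + dj), []):
                if j != i and np.linalg.norm(points[j] - points[i]) <= h:
                    out.append(j)
    return out

pts = np.random.default_rng(2).uniform(0, 10, size=(500, 2))
cells = build_cells(pts, h=1.0)
print("Neighbours of particle 0:", neighbours(pts, cells, 0, 1.0))
```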
Abstract:
A consistent Finite Element formulation was developed for four classical 1-D beam models. This formulation is based upon the solution of the homogeneous differential equation (or equations) associated with each model. Results such as the shape functions, stiffness matrices and consistent force vectors for the constant-section beam were found. Some of these results were compared with the corresponding ones obtained by the standard Finite Element Method (i.e. using polynomial expansions for the field variables). Some of the difficulties reported in the literature concerning these models may be avoided by this technique, and some numerical sensitivity analyses on this subject are presented.
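The idea can be illustrated for the Euler-Bernoulli model: the homogeneous equation EI w'''' = 0 has cubic solutions, so shape functions built from them reproduce the classical constant-section beam element exactly. The symbolic sketch below derives them and the resulting stiffness matrix; it illustrates the approach, not the paper's derivation for all four models.

```python
# Shape functions and stiffness matrix of a constant-section Euler-Bernoulli
# beam element, derived from the general solution of EI*w'''' = 0.
import sympy as sp

x, L, EI = sp.symbols("x L EI", positive=True)
c = sp.symbols("c0:4")
w = c[0] + c[1]*x + c[2]*x**2 + c[3]*x**3          # general homogeneous solution

# Nodal DOFs: deflection and rotation at x = 0 and x = L.
dofs = [w.subs(x, 0), sp.diff(w, x).subs(x, 0),
        w.subs(x, L), sp.diff(w, x).subs(x, L)]
B = sp.Matrix([[sp.diff(d, ci) for ci in c] for d in dofs])
N = (sp.Matrix([1, x, x**2, x**3]).T * B.inv()).T    # shape functions N_i(x)

# Stiffness matrix K_ij = integral of EI * N_i'' * N_j'' over the element.
K = sp.Matrix(4, 4, lambda i, j: sp.integrate(
        EI * sp.diff(N[i], x, 2) * sp.diff(N[j], x, 2), (x, 0, L)))
sp.pprint(sp.simplify(K))   # reproduces the standard 12, 6L, 4L^2, ... matrix
```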
Abstract:
Driven by the latest discoveries enabled by recent technological advances and space missions, the study of asteroids has awakened the interest of the scientific community. In fact, asteroid missions have become very popular in recent years (Hayabusa, Dawn, OSIRIS-REx, ARM, AIM-DART, ...), motivated by their outstanding scientific interest. Asteroids are fundamental constituents in the evolution of the Solar System, can be seen as vast concentrations of valuable natural resources, and are also considered strategic targets for the future of space exploration. It has long been hypothesized that small near-Earth objects (NEOs) could be captured and delivered to the vicinity of the Earth in order to allow affordable access to them for in-situ science, resource utilization and other purposes. On the other side of the balance, asteroids are often seen as potential planetary hazards, since impacts with the Earth happen all the time, and an asteroid large enough could eventually trigger catastrophic events. In spite of the severity of such occurrences, they are also utterly hard to predict. In fact, the rich dynamical aspects of asteroids, their complex modelling and observational uncertainties make it exceptionally challenging to predict their future position accurately enough. This becomes particularly relevant when asteroids exhibit close encounters with the Earth, and more so when these happen recurrently. In such situations, where mitigation measures may need to be taken, it is of paramount importance to be able to accurately estimate their trajectories and collision probabilities. As a consequence, advanced tools are needed to model their dynamics and accurately predict their orbits, as well as new technological concepts to manipulate their orbits if necessary. The goal of this thesis is to provide new methods, techniques and solutions to address these challenges. The contributions of this thesis fall into two areas: one devoted to the numerical propagation of asteroids, and another to asteroid deflection and capture concepts. Hence, the first part of the dissertation presents novel advances applicable to the high-accuracy dynamical propagation of near-Earth asteroids using regularization and perturbation techniques, with special emphasis on the DROMO method, whereas the second part exposes pioneering ideas for asteroid retrieval missions and discusses the use of an “ion beam shepherd” (IBS) for asteroid deflection purposes.
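For context, a plain Cowell-type propagation of a heliocentric state with a general-purpose adaptive integrator is sketched below with an invented initial state; regularized formulations such as DROMO reformulate these same dynamics to mitigate the error growth that this naive baseline suffers, notably near close encounters.

```python
# Baseline Cowell propagation of a heliocentric two-body state (perturbations
# would be added to the acceleration). Initial state is illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

mu_sun = 1.32712440018e20          # Sun gravitational parameter [m^3/s^2]
AU = 1.495978707e11

def two_body(t, y):
    r = y[:3]
    a = -mu_sun * r / np.linalg.norm(r) ** 3   # add perturbations here
    return np.concatenate((y[3:], a))

y0 = np.array([1.1 * AU, 0.0, 0.0, 0.0, 28.0e3, 1.0e3])   # assumed NEO state
sol = solve_ivp(two_body, (0, 2 * 365.25 * 86400), y0, rtol=1e-10, atol=1e3)
print("Final heliocentric distance [AU]:", np.linalg.norm(sol.y[:3, -1]) / AU)
```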
Abstract:
The city of Lorca (Spain) was hit on May 11th, 2011, by two consecutive earthquakes of magnitudes 4.6 and 5.2 Mw, causing casualties and important damage in buildings. Many of the damaged structures were reinforced concrete frames with wide beams. This study quantifies the expected level of damage on this structural type in the case of the Lorca earthquake by means of a seismic index Iv that compares the energy input by the earthquake with the energy absorption/dissipation capacity of the structure. The prototype frames investigated represent structures designed in two time periods (1994–2002 and 2003–2008), in which the applicable codes were different. The influence of the masonry infill walls and the proneness of the frames to concentrate damage in a given story were further investigated through nonlinear dynamic response analyses. It is found that (1) the seismic index method predicts levels of damage that range from moderate/severe to complete collapse, a prediction consistent with the observed damage; (2) the presence of masonry infill walls makes the structure very prone to damage concentration and reduces the overall seismic capacity of the building; and (3) a proper hierarchy of strength between beams and columns guaranteeing the formation of a strong column-weak beam mechanism (as prescribed by seismic codes), together with counter-measures to avoid the negative interaction between non-structural infill walls and the main frame, would have reduced the level of damage from Iv = 1 (collapse) to about Iv = 0.5 (moderate/severe damage).
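As a purely illustrative demand/capacity calculation in the spirit of an energy-based index, the sketch below expresses the input energy as an equivalent velocity and compares it with the velocity equivalent of the absorption/dissipation capacity. The exact definition of Iv and all numbers here are assumptions, not values from the Lorca study.

```python
# Energy-based demand/capacity ratio, with equivalent velocities V = sqrt(2E/M).
import math

M = 1.2e6          # assumed seismic mass [kg]
E_input = 6.0e5    # assumed energy input by the ground motion [J]
E_capacity = 5.5e5 # assumed absorption/dissipation capacity [J]

V_E = math.sqrt(2 * E_input / M)
V_u = math.sqrt(2 * E_capacity / M)
Iv = V_E / V_u
print(f"V_E = {V_E:.2f} m/s, V_u = {V_u:.2f} m/s, Iv = {Iv:.2f}")
print("Iv >= 1 would indicate demand exceeding capacity (collapse-level damage).")
```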