10 results for Radiotherapy Planning, Computer-Assisted

at Universidad Politécnica de Madrid


Relevance:

100.00%

Abstract:

Pre-operative planning has become an essential task in complex surgeries and therapies, especially those affecting soft tissue. One example where soft-tissue preoperative planning is of high interest is liver surgery. It involves the accurate detection and identification of individual liver lesions and vessels, as well as the proper segmentation and volume estimation of the functional liver. This process is very important because it determines both whether the patient is a suitable candidate for surgical therapy and the type of procedure. Soft-tissue radiation therapy is a second example where planning is required, for both conventional external and intraoperative radiotherapy. It involves the segmentation of the tumor target and vulnerable organs and the estimation of the planned dose. Functional liver segmentations and volume estimations for surgery planning are commonly obtained from computed tomography (CT) images. Similarly, in radiation therapy planning, the targets to be irradiated and the healthy, vulnerable tissues to be protected are commonly delineated on CT scans. However, developments in magnetic resonance imaging (MRI) technology are progressively offering additional advantages. For instance, the hepatic metastasis detection rate has been found to be significantly higher in Gd–EOB–DTPA-enhanced MRI than in CT. Recent studies therefore highlight the importance of combining the information from CT and MRI to achieve the highest possible accuracy in radiotherapy and to facilitate an accurate description of liver lesions. To improve these two soft-tissue preoperative planning scenarios, an accurate nonrigid image registration algorithm is clearly required; however, the vast majority of commercial systems only provide rigid registration.

Voxel intensity measures have been shown to be robust image similarity criteria, and among them Mutual Information (MI) is always the first candidate in multimodal registration. One of the main drawbacks of MI, however, is the absence of spatial information and the assumption that the statistical relationship between the images is homogeneous over their whole domain. The hypothesis of this thesis is that incorporating spatial organ information into the registration process can improve its robustness and quality, taking advantage of the availability of clinical segmentations. In this work, a multimodal nonrigid 3D registration framework using a new Organ-Focused Mutual Information metric (OF-MI) is proposed, validated and compared with the classical formulation of Mutual Information. It improves registration results in problematic areas by adding regional information to the similarity criterion, exploiting the segmentations actually available in standard clinical protocols, and allowing the statistical dependence between the two modalities to differ among organs or regions. The proposed method is applied to the registration of CT and T1-weighted delayed Gd–EOB–DTPA-enhanced MRI, as well as to the registration of CT and MRI images for rectal intraoperative radiotherapy planning. Additionally, a supporting 3D segmentation algorithm based on level sets has been developed to incorporate the organ information into the registration. The segmentation algorithm has been specifically designed for the estimation of the healthy, functional liver volume and has demonstrated good performance on a set of abdominal CT studies.

Results show a statistically significant improvement of registration quality measures with OF-MI compared with classical MI, both with simulated data (p<0.001) and with real data, in liver applications registering CT and Gd–EOB–DTPA-enhanced MRI and in registration for rectal radiotherapy planning using multi-organ OF-MI (p<0.05). Additionally, OF-MI yields more stable results with smaller dispersion than MI and behaves more robustly with respect to changes in signal-to-noise ratio and parameter variation. The proposed OF-MI always shows equal or better accuracy than classical MI and can therefore be a very convenient alternative in applications where the robustness of the method and the ease of parameter selection are particularly important.
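
The MI criterion discussed above can be illustrated numerically: MI is computed from the joint intensity histogram of the two images. The region-focused variant below is only a simplified sketch of the idea behind OF-MI (separate joint statistics per organ region), not the thesis's actual formulation; all function names are hypothetical.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) from the joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                       # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def region_focused_mi(a, b, mask, bins=32):
    """Toy region-weighted MI: separate joint statistics inside and
    outside an organ mask, combined by region size (illustrative only)."""
    total = 0.0
    for region in (mask, ~mask):
        if region.any():
            total += region.mean() * mutual_information(a[region], b[region], bins)
    return total
```

Letting the joint statistics differ per region is what allows the similarity criterion to tolerate organ-dependent intensity relationships between the two modalities.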

Relevance:

100.00%

Abstract:

Purpose: Surgical simulators are currently essential within any laparoscopic training program because they provide a low-stakes, reproducible and reliable environment in which to acquire basic skills. The purpose of this study is to determine the training learning curve based on different metrics corresponding to five tasks included in the SINERGIA laparoscopic virtual reality simulator. Methods: Thirty medical students without surgical experience participated in the study. Five SINERGIA tasks were included: Coordination, Navigation, Navigation and touch, Accurate grasping, and Coordinated pulling. Each participant was trained in SINERGIA. This training consisted of eight sessions (R1–R8) of the five tasks and was carried out over two consecutive days with four sessions per day. A statistical analysis was performed, and the results of R1, R4 and R8 were pair-wise compared with the Wilcoxon signed-rank test. Significance is considered at P value <0.005. Results: In total, 84.38% of the metrics provided by SINERGIA and included in this study show significant differences when comparing R1 and R8. Metrics improve mostly in the first half of the training (75.00% when R1 and R4 are compared vs. 37.50% when R4 and R8 are compared). In the tasks Coordination and Navigation and touch, all metrics improve. On the other hand, Navigation improves in only 60% of the analyzed metrics. Most learning curves show an improvement, with better results in the fulfillment of the different tasks. Conclusions: Learning curves of the metrics that assess the basic psychomotor laparoscopic skills acquired in the SINERGIA virtual reality simulator show a faster learning rate during the first part of the training. Nevertheless, eight repetitions of the tasks are not enough to acquire all the psychomotor skills that can be trained in SINERGIA. Therefore, and based on these results together with previous work, SINERGIA could be used as a training tool with a properly designed training program.
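
The pair-wise session comparison described above can be sketched with SciPy's Wilcoxon signed-rank test; the per-participant scores below are made up for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Illustrative scores for one metric across 12 participants
# (lower = better); these numbers are invented, not the study's.
r1 = np.array([52.1, 47.3, 60.2, 55.0, 49.8, 58.4,
               51.2, 62.0, 54.6, 57.9, 50.5, 59.9])
r8 = np.array([40.5, 38.2, 45.1, 41.7, 39.9, 44.0,
               40.8, 47.3, 42.2, 43.6, 41.1, 44.9])

stat, p = wilcoxon(r1, r8)   # paired, non-parametric comparison
improved = p < 0.005         # significance threshold used in the study
```

A non-parametric paired test is the natural choice here because the simulator metrics are not guaranteed to be normally distributed.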

Relevance:

100.00%

Abstract:

The aim is to propose an automated, patient-specific algorithm for creating accurate and smooth meshes of the aortic anatomy, to be used for evaluating rupture risk factors of abdominal aortic aneurysms (AAA). Finite element (FE) analyses and simulations require meshes that are smooth and anatomically accurate, capturing both the artery wall and the intraluminal thrombus (ILT). The two main difficulties are the modeling of the arterial bifurcations and of the ILT, which has an arbitrary shape that conforms to the aortic wall.
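
One standard way to obtain the smooth surfaces that FE analysis needs (not necessarily the specific method used in this work) is Laplacian smoothing, where each vertex is relaxed toward the centroid of its 1-ring neighbors. A minimal sketch:

```python
import numpy as np

def laplacian_smooth(vertices, faces, iterations=10, lam=0.5):
    """Classic Laplacian smoothing of a triangle mesh: each vertex moves
    a fraction `lam` toward the centroid of its 1-ring neighbors."""
    n = len(vertices)
    neighbors = [set() for _ in range(n)]
    for i, j, k in faces:              # build vertex adjacency from triangles
        neighbors[i].update((j, k))
        neighbors[j].update((i, k))
        neighbors[k].update((i, j))
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(iterations):
        centroids = np.array([v[list(nb)].mean(axis=0) if nb else v[i]
                              for i, nb in enumerate(neighbors)])
        v += lam * (centroids - v)
    return v
```

Plain Laplacian smoothing shrinks the mesh, which is why volume-preserving variants are often preferred when the smoothed surface feeds a quantitative analysis such as AAA rupture-risk evaluation.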

Relevance:

100.00%

Abstract:

The purpose of this work is twofold: first, to develop a process to automatically create parametric models of the aorta that can adapt to any possible intraoperative deformation of the vessel; second, to provide the tools needed to perform this deformation in real time by means of a non-rigid registration method. This dynamically deformable model will later be used in a VR-based surgery guidance system for aortic catheterization procedures, showing the vessel changes in real time.

Relevance:

100.00%

Abstract:

Enhanced learning environments are arising with great success in the field of cognitive skills training in minimally invasive surgery (MIS) because they provide multiple benefits, avoiding time, spatial and cost constraints. TELMA [1,2] is a new technology-enhanced learning platform that promotes collaborative and ubiquitous training of surgeons. The platform is based on four main modules: an authoring tool, a learning content and knowledge management system, an evaluation module and a professional network. TELMA has been designed and developed with a focus on the user; it is therefore necessary to carry out a user validation as the final stage of development. For this purpose, e-MIS validity [3] has been defined. This validation covers usability, contents and functionality, for both the development and production stages of any e-Learning web platform. Using e-MIS validity, the e-Learning platform is fully validated, since it includes subjective and objective metrics. The purpose of this study is to specify and apply a set of objective and subjective metrics using e-MIS validity to test the usability, contents and functionality of the TELMA environment in the development stage.

Relevance:

100.00%

Abstract:

Laparoscopic instrument tracking systems are an essential component of image-guided interventions and offer new possibilities to improve and automate objective assessment of surgical skills. In this study we present our system design for applying a third-generation optical pose tracker (MicronTracker®) to laparoscopic practice. A technical evaluation of this design is performed to analyze its accuracy in computing the laparoscopic instrument tip position. Results show a stable fluctuation error over the entire analyzed workspace. The relative position errors are 1.776±1.675 mm, 1.817±1.762 mm, 1.854±1.740 mm, 2.455±2.164 mm, 2.545±2.496 mm, 2.764±2.342 mm and 2.512±2.493 mm for distances of 50, 100, 150, 200, 250, 300 and 350 mm, respectively. The accumulated distance error increases with the measured distance. The instrument inclination range covered by the system is high, from 90 to 7.5 degrees. The system reports a low positional accuracy for the instrument tip.
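
The reported figures are of the form mean±std of per-sample Euclidean errors between measured and reference tip positions; computing them can be sketched as follows (function name and sample data are illustrative, not from the study).

```python
import numpy as np

def relative_position_errors(measured, reference):
    """Mean and standard deviation of the Euclidean error between
    measured and reference instrument-tip positions (N x 3, in mm)."""
    d = np.linalg.norm(np.asarray(measured, dtype=float)
                       - np.asarray(reference, dtype=float), axis=1)
    return d.mean(), d.std()

# Two samples with a known 5 mm error each -> mean 5.0, std 0.0
mean_err, std_err = relative_position_errors(
    [[3, 4, 0], [0, 0, 5]], [[0, 0, 0], [0, 0, 0]])
```
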

Relevance:

100.00%

Abstract:

Image segmentation can be posed as the problem of minimizing a discrete energy. This raises two questions: first, to define an energy whose minimum provides the desired segmentation and, second, once the energy is defined, to find its global minimum. The first part of this thesis addresses the second problem and the second part, in a more applied context, the first. Minimization techniques based on graph cuts find the minimum of a discrete energy in polynomial time via min-cut/max-flow algorithms. Nevertheless, these techniques can only be applied to graph-representable energies. An important challenge is to study which energies are graph-representable and to construct graphs that represent them, which amounts to finding a gadget function with additional variables. The first part of this work studies properties of gadget functions that allow the number of additional variables to be bounded from above. Moreover, the graph-representable energies with four variables are characterized, defining gadgets with two additional variables.

The second part addresses the segmentation of medical images, often the basis for diagnosis and therapy monitoring. Multi-atlas segmentation is a powerful automatic segmentation technique for medical images, with three important aspects: the type of registration between the atlases and the target image, the atlas selection, and the label fusion method. The label fusion method can be formulated as an energy minimization problem, and in this respect two new graph-representable energies are introduced. The first, of order two, is used for the segmentation of the liver against the background in abdominal computed tomography (CT) images. The second, of higher order, is used for the segmentation of the hippocampi against the background in magnetic resonance (MRI) brain images.
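
For pairwise (order-two) terms on binary variables, graph-representability reduces to the well-known submodularity condition of Kolmogorov and Zabih: θ(0,0) + θ(1,1) ≤ θ(0,1) + θ(1,0). Checking it is a one-liner; the higher-order representability questions studied in this thesis are considerably more involved.

```python
def is_submodular(theta):
    """theta[(a, b)] is the pairwise energy for binary labels a, b.
    A pairwise term is graph-representable iff it is submodular:
    theta(0,0) + theta(1,1) <= theta(0,1) + theta(1,0)."""
    return theta[(0, 0)] + theta[(1, 1)] <= theta[(0, 1)] + theta[(1, 0)]

# A Potts smoothness term (penalize disagreeing neighbor labels) passes:
potts = {(0, 0): 0, (1, 1): 0, (0, 1): 1, (1, 0): 1}
ok = is_submodular(potts)
```

Submodular terms reward neighboring pixels for taking the same label, which is exactly the smoothness prior used in segmentation energies such as liver-versus-background label fusion.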

Relevance:

100.00%

Abstract:

Background: One of the main challenges for biomedical research lies in the computer-assisted integrative study of large and increasingly complex combinations of data in order to understand molecular mechanisms. Preserving the materials and methods of such computational experiments with clear annotations is essential for understanding an experiment, and this is increasingly recognized in the bioinformatics community. Our assumption is that offering means of digital, structured aggregation and annotation of the objects of an experiment will provide the meta-data necessary for a scientist to understand and recreate its results. To support this, we explored a model for the semantic description of a workflow-centric Research Object (RO), where an RO is defined as a resource that aggregates other resources, e.g., datasets, software, spreadsheets, text, etc. We applied this model to a case study in which we analysed human metabolite variation by workflows. Results: We present the application of the workflow-centric RO model to our bioinformatics case study. Three workflows were produced following recently defined best practices for workflow design. By modelling the experiment as an RO, we were able to automatically query the experiment and answer questions such as “which particular data was input to a particular workflow to test a particular hypothesis?” and “which particular conclusions were drawn from a particular workflow?”. Conclusions: Applying a workflow-centric RO model to aggregate and annotate the resources used in a bioinformatics experiment allowed us to retrieve the conclusions of the experiment in the context of the driving hypothesis, the executed workflows and their input data. The RO model is an extendable reference model that can also be used by other systems.
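
Queries like the ones quoted above presuppose structured annotations linking inputs, workflows and conclusions. A toy in-memory sketch of the aggregation-plus-annotation idea follows; the resource paths and relation names are hypothetical and do not reflect the actual RO vocabulary.

```python
# Toy Research Object: a set of resources plus annotations linking them.
# All names below are illustrative placeholders.
ro = {
    "resources": ["data/metabolites.csv", "workflows/variation.t2flow",
                  "docs/hypothesis.txt", "docs/conclusions.txt"],
    "annotations": [
        {"subject": "data/metabolites.csv", "relation": "inputTo",
         "object": "workflows/variation.t2flow"},
        {"subject": "docs/conclusions.txt", "relation": "drawnFrom",
         "object": "workflows/variation.t2flow"},
    ],
}

def query(ro, relation, obj):
    """Which resources stand in `relation` to `obj`?"""
    return [a["subject"] for a in ro["annotations"]
            if a["relation"] == relation and a["object"] == obj]

# "Which data was input to this workflow?"
inputs = query(ro, "inputTo", "workflows/variation.t2flow")
```

In the real RO model these links are expressed as semantic annotations over aggregated resources, so the same style of query can be answered automatically across experiments.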

Relevance:

100.00%

Abstract:

Assessment of diastolic chamber properties of the right ventricle by global fitting of pressure-volume data and conformational analysis of 3D + T echocardiographic sequences

Relevance:

100.00%

Abstract:

Explosions on buildings, whether accidental or intentional, are infrequent, but their effects can be catastrophic. It is desirable to be able to predict, with sufficient accuracy, the consequences of these dynamic actions on civil buildings, among which frame-type reinforced concrete structures are a common typology. This doctoral thesis explores different practical options for the modeling and computer-assisted numerical calculation of reinforced concrete structures subjected to explosions. Numerical finite element models with explicit time integration are employed, demonstrating their effective capacity to simulate the fast-dynamic, highly nonlinear physical and structural phenomena that occur, and to predict the damage caused both by the explosion itself and by the possible progressive collapse of the structure. The work has been carried out with the commercial finite element code LS-DYNA (Hallquist, 2006), developing several types of calculation model that can be classified into two main groups: 1) models based on continuum finite elements, in which the continuous medium is discretized directly through nodal displacement degrees of freedom; 2) models based on structural finite elements (beams and shells), which include kinematic hypotheses for linear or surface elements. These models are developed and discussed at three levels: 1) material behavior, 2) response of structural elements such as columns, beams and slabs, and 3) response of complete buildings or significant parts of them.

Very detailed 3D continuum finite element models are developed, modeling the mass concrete and the reinforcement steel separately. The concrete is represented with the CSCM constitutive model (Murray et al., 2007), which exhibits inelastic behavior with different responses in tension and compression, hardening, cracking and compression damage, and failure. The steel is represented with a bilinear elastic-plastic constitutive model with failure. The actual geometry of the concrete is modeled with 3D continuum finite elements, and each reinforcing bar with beam-type finite elements at its exact position within the concrete mass. The mesh is built by superimposing the concrete continuum elements and the beam elements of the segregated reinforcement, which are forced to follow the deformation of the solid at each point by means of a penalty algorithm, thus reproducing the behavior of reinforced concrete. In this work these models are referred to simply as continuum FE models. With them, the structural response of construction elements (columns, slabs and frames) under explosive actions is analyzed. The results have also been compared with experimental tests on beams and slabs under different explosive charges, verifying an acceptable agreement and allowing a calibration of the calculation parameters. However, such detailed models are not advisable for analyzing complete buildings, since the high number of finite elements required raises their computational cost beyond the reach of current calculation resources.

In addition, structural finite element models (beams and shells) are developed which, at a reduced computational cost, are able to reproduce the global behavior of the structure with similar accuracy. The mass concrete and the reinforcement steel are again modeled separately. The concrete is represented with the EC2 constitutive model (Hallquist et al., 2013), which also exhibits inelastic behavior with different responses in tension and compression, hardening, cracking and compression damage, and failure, and is used in shell-type finite elements. The steel is again represented with a bilinear elastic-plastic constitutive model with failure, using beam-type finite elements. An equivalent geometry of the concrete and the reinforcement is modeled, taking into account the relative position of the steel within the concrete mass. The two meshes are joined through common nodes, producing a joint response. These models are referred to simply as structural FE models. With them, the same construction elements as with the continuum FE models are simulated, and by comparing their structural responses under explosion the former are calibrated, obtaining a similar structural behavior at a reduced computational cost.

It is verified that both the continuum FE models and the structural FE models are also accurate for the analysis of progressive collapse, and that they can be used for the simultaneous study of explosion damage and the subsequent collapse. To this end, formulations are included that account for self-weight, live loads and the contact of parts of the structure against each other. Both models are validated against a full-scale test in which a two-story, six-column module collapses after the removal of one of its columns. The computational cost of the continuum FE model for the simulation of this test is much higher than that of the structural FE model, making its application to complete buildings unfeasible, whereas the structural FE model yields a sufficiently accurate global response at an affordable cost. Finally, the structural FE models are used to analyze explosions on multi-story buildings, and two explosive-load scenarios are simulated for a complete building at a moderate computational cost.
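
Explicit time integration of the kind used by codes such as LS-DYNA advances the equations of motion with a central difference scheme, which is conditionally stable (the time step must stay below a critical value set by the highest natural frequency). A one-degree-of-freedom sketch under a short pressure pulse; all numerical values are illustrative, not from the thesis.

```python
import numpy as np

def explicit_central_difference(m, k, f, dt, steps):
    """Central-difference (velocity-Verlet form) explicit integration of
    m*u'' + k*u = f(t). Conditionally stable: dt < 2*sqrt(m/k)."""
    u, v = 0.0, 0.0
    a = (f(0.0) - k * u) / m
    history = [u]
    for n in range(steps):
        v_half = v + 0.5 * dt * a           # half-step velocity
        u = u + dt * v_half                 # displacement update
        a = (f((n + 1) * dt) - k * u) / m   # acceleration from new state
        v = v_half + 0.5 * dt * a
        history.append(u)
    return np.array(history)

# Short triangular pressure pulse as a crude stand-in for a blast load.
pulse = lambda t: 100.0 * max(0.0, 1.0 - t / 0.01)
u = explicit_central_difference(m=1.0, k=1.0e4, f=pulse, dt=1.0e-4, steps=2000)
```

Here dt = 1e-4 s is well below the critical step 2*sqrt(m/k) = 0.02 s; in a real FE model the critical step is governed by the smallest element, which is why detailed continuum meshes become so expensive under explicit integration.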