148 results for DSA
Abstract:
Double fenestration of the anterior communicating artery (ACoA) complex associated with an aneurysm is a very rare finding and is usually caused by ACoA duplication and the presence of a median artery of the corpus callosum (MACC). We present a patient in whom double fenestration was not associated with ACoA duplication or even with MACC, representing, therefore, a previously unreported anatomic variation. A 43-year-old woman experienced sudden headache, and CT scans showed subarachnoid haemorrhage (SAH). On admission, her clinical condition was consistent with Hunt and Hess grade II. Conventional digital subtraction angiography (DSA) was performed and revealed multiple intracranial aneurysms arising from both middle cerebral arteries (MCA) and from the ACoA. Three-dimensional rotational angiography (3D-RA) disclosed a double fenestration of the ACoA complex that had been missed by DSA. The patient underwent a classic pterional approach to achieve occlusion of both the left MCA and ACoA aneurysms by surgical clipping. The post-operative period was uneventful. A rare anatomical variation characterised by a double fenestration not associated with ACoA duplication or MACC is described. The DSA images missed the double fenestration, which was disclosed by 3D-RA, underlining the importance of 3D-RA in the diagnosis and surgical planning of intracranial aneurysms.
Abstract:
OBJECT: Preliminary experience with the C-Port Flex-A Anastomosis System (Cardica, Inc.) to enable rapid automated anastomosis has been reported in coronary artery bypass surgery. The goal of the current study was to define the feasibility and safety of this method for high-flow extracranial-intracranial (EC-IC) bypass surgery in a clinical series. METHODS: In a prospective study design, patients with symptomatic carotid artery (CA) occlusion were selected for C-Port-assisted high-flow EC-IC bypass surgery if they met the following criteria: 1) transient or moderate permanent symptoms of focal ischemia; 2) CA occlusion; 3) hemodynamic instability; and 4) provision of informed consent. Bypasses were done using a radial artery graft that was proximally anastomosed to the superficial temporal artery trunk, the cervical external CA, or the common CA. All distal cerebral anastomoses were performed on M2 branches using the C-Port Flex-A system. RESULTS: Within 6 months, 10 patients were enrolled in the study. The distal automated anastomosis could be accomplished in all patients; the median temporary occlusion time was 16.6 ± 3.4 minutes. Intraoperative digital subtraction angiography (DSA) confirmed good bypass function in 9 patients, and in 1 the anastomosis was classified as fair. There was 1 major perioperative complication, the creation of a pseudoaneurysm due to a hardware problem. In all but 1 case the bypass was shown to be patent on DSA after 7 days; in 1 patient a late occlusion developed due to vasospasm after a sylvian hemorrhage. One-week follow-up DSA revealed transient asymptomatic extracranial spasm of the donor artery and the radial artery graft in 1 case. Two patients developed a limited zone of infarction on CT scanning during the follow-up course. CONCLUSIONS: In patients with symptomatic CA occlusion, C-Port Flex-A-assisted high-flow EC-IC bypass surgery is a technically feasible procedure. The system needs further modification to achieve a faster and safer anastomosis and to enable a conclusive comparison with standard and laser-assisted methods for high-flow bypass surgery.
Abstract:
OBJECTIVE: The standard technique of two-dimensional intra-arterial digital subtraction angiography (2D-DSA) for the imaging of experimental rabbit aneurysms is invasive and carries considerable surgical risks. Therefore, minimally invasive techniques, ideally providing three-dimensional imaging for intervention planning and follow-up, are needed. This study evaluates the feasibility and quality of three-dimensional 3-T magnetic resonance angiography (3D-3T-MRA) and compares 3D-3T-MRA with 2D-DSA in experimental aneurysms in the rabbit. METHOD: Three microsurgically created aneurysms in three rabbits were evaluated using 2D-DSA and 3D-3T-MRA. Imaging of the aneurysms was performed 2 weeks after creation using 2D-DSA and contrast-enhanced (CE) MRA. Measurements included the aneurysm dome (length and width) and the aneurysm neck. Aneurysm volumes were determined using CE-MRA. RESULTS: The measurements of the aneurysm dimensions and the evaluation of the neighbouring vessels with both techniques showed a good correlation. The mean aneurysm length, aneurysm width and neck width measured with DSA (6.9, 4.1 and 2.8 mm, respectively) correlated with the measurements performed with 3D-3T-MRA (6.9, 4.0 and 2.5 mm, respectively). The mean aneurysm volume measured with CE-MRA was 46.7 mm³. CONCLUSION: 3D-3T CE-MRA is a feasible, less invasive and safer imaging alternative to DSA for experimental aneurysms. Additionally, this precise technique offers the possibility of repeated 3D aneurysm volumetry for long-term follow-up studies after endovascular aneurysm occlusion.
Abstract:
We report on oxygenation changes noninvasively recorded by multichannel continuous-wave near infrared spectroscopy (CW-NIRS) during endovascular neuroradiologic interventions requiring temporary balloon occlusion of arteries supplying the cerebral circulation. Digital subtraction angiography (DSA) provides reference data on the site, timing, and effectiveness of the flow stagnation as well as on the amount and direction of collateral circulation. This setting allows us to relate CW-NIRS findings to brain-specific perfusion changes. We focused our analysis on the transition from normal perfusion to vessel occlusion, i.e., before hypoxia becomes clinically apparent. The localization of the maximal response correlated either with the core (occlusion of the middle cerebral artery) or with the watershed areas (occlusion of the internal carotid artery) of the respective vascular territories. In one patient with clinically and angiographically confirmed insufficient collateral flow during carotid artery occlusion, the total hemoglobin concentration became significantly asymmetric, with decreased values in the ipsilateral watershed area and increased values contralaterally. Multichannel CW-NIRS monitoring might serve as an objective and early predictive marker of critical perfusion changes during interventions, helping to prevent hypoxic damage of the brain. It might also provide valuable human reference data on oxygenation changes as they typically occur during acute stroke.
Abstract:
OBJECTIVES Susceptibility-weighted imaging (SWI) enables visualization of thrombotic material in acute ischemic stroke. We aimed to validate the accuracy of thrombus depiction on SWI compared with time-of-flight MRA (TOF-MRA), first-pass gadolinium-enhanced MRA (GE-MRA) and digital subtraction angiography (DSA). Furthermore, we analysed the impact of thrombus length on reperfusion success with endovascular therapy. METHODS Consecutive patients with acute ischemic stroke due to middle cerebral artery (MCA) occlusion undergoing endovascular recanalization were screened. Only patients with a pretreatment SWI were included. Thrombus visibility and location on SWI were compared to those on TOF-MRA, GE-MRA and DSA. The association between thrombus length on SWI and reperfusion success was studied. RESULTS Eighty-four of the 88 patients included (95.5%) showed an MCA thrombus on SWI. Strong correlations between thrombus location on SWI and that on TOF-MRA (Pearson's correlation coefficient 0.918, P < 0.001), GE-MRA (0.887, P < 0.001) and DSA (0.841, P < 0.001) were observed. Successful reperfusion was not significantly related to thrombus length on SWI (P = 0.153; binary logistic regression). CONCLUSIONS In MCA occlusion, thrombus location as seen on SWI correlates well with angiographic findings. In contrast to intravenous thrombolysis, thrombus length appears to have no impact on the reperfusion success of endovascular therapy. KEY POINTS • SWI helps in assessing location and length of thrombi in the MCA • SWI, MRA and DSA are equivalent in detecting the MCA occlusion site • SWI is superior in identifying the distal end of the thrombus • Stent retrievers should be deployed over the distal thrombus end • Thrombus length did not affect success of endovascular reperfusion guided by SWI.
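For illustration only, the kind of association test reported above (reperfusion success regressed on thrombus length) can be sketched with synthetic data; the variable names, distributions and numbers below are assumptions made for this sketch, not the study's dataset or code:

    import numpy as np
    import statsmodels.api as sm

    # Synthetic illustration of the analysis described above: does thrombus
    # length on SWI predict successful reperfusion? The data are simulated;
    # only the form of the test (binary logistic regression) follows the abstract.
    rng = np.random.default_rng(0)
    n = 88
    thrombus_length_mm = rng.gamma(shape=4.0, scale=3.0, size=n)   # ~12 mm mean, made up
    # Simulate reperfusion with only a weak dependence on length, so the
    # coefficient is expected to be non-significant, as reported in the study.
    logit_p = 1.0 - 0.03 * thrombus_length_mm
    reperfused = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

    X = sm.add_constant(thrombus_length_mm)
    model = sm.Logit(reperfused, X).fit(disp=False)
    print(model.summary())   # inspect the p-value of the length coefficient

With real data, the p-value of the length coefficient would correspond to the P = 0.153 reported in the abstract.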
Abstract:
BACKGROUND The extent of hypoperfusion is an important prognostic factor in acute ischemic stroke. Previous studies have postulated that the extent of prominent cortical veins (PCV) on susceptibility-weighted imaging (SWI) reflects the extent of hypoperfusion. Our aim was to investigate whether there is an association between PCV and the grade of leptomeningeal arterial collateralization in acute ischemic stroke. In addition, we analyzed the correlation between SWI and perfusion-MRI findings. METHODS 33 patients with acute ischemic stroke due to a thromboembolic M1-segment occlusion underwent MRI followed by digital subtraction angiography (DSA) and were subdivided into two groups, with very good to good and with moderate to no leptomeningeal collaterals, according to the DSA. The extent of PCV on SWI, of diffusion restriction (DR) on diffusion-weighted imaging (DWI) and of prolonged mean transit time (MTT) on perfusion imaging was graded according to the Alberta Stroke Program Early CT Score (ASPECTS). The National Institutes of Health Stroke Scale (NIHSS) scores at admission and the time between symptom onset and MRI were documented. RESULTS 20 patients showed very good to good and 13 patients poor to no collateralization. PCV-ASPECTS was significantly higher for cases with good leptomeningeal collaterals than for those with poor leptomeningeal collaterals (mean 4.1 versus 2.69; p=0.039). MTT-ASPECTS was significantly lower than PCV-ASPECTS in all 33 patients (mean 1.0 versus 3.5; p<0.00). CONCLUSIONS In our small study the grade of leptomeningeal collateralization correlates with the extent of PCV on SWI in acute ischemic stroke, owing to the increased deoxyhemoglobin-to-oxyhemoglobin ratio in hypoperfused tissue. Consequently, extensive PCV correlate with poor leptomeningeal collateralization, while less pronounced PCV correlate with good leptomeningeal collateralization. Furthermore, SWI is a very helpful tool in detecting tissue at risk, but it cannot replace PWI, since MTT detects significantly more ill-perfused areas than SWI, especially in well-collateralized subjects.
Abstract:
BACKGROUND AND PURPOSE The prevalence and clinical importance of primarily fragmented thrombi in patients with acute ischemic stroke remain elusive. Whole-brain SWI was used to detect multiple thrombus fragments, and their clinical significance was analyzed. MATERIALS AND METHODS Pretreatment SWI was analyzed for the presence of a single intracranial thrombus or multiple intracranial thrombi. Associations with baseline clinical characteristics, complications, and clinical outcome were studied. RESULTS Single intracranial thrombi were detected in 300 (92.6%), and multiple thrombi in 24 (7.4%), of 324 patients. In 23 patients with multiple thrombi, all thrombus fragments were located in the vascular territory distal to the primary occluding thrombus; in 1 patient, thrombi were found in both the anterior and posterior circulation. Only a minority of thrombus fragments were detected on TOF-MRA, first-pass gadolinium-enhanced MRA, or DSA. Patients with multiple intracranial thrombi presented with more severe symptoms (median NIHSS scores, 15 versus 11; P = .014) and larger ischemic areas (median DWI ASPECTS, 5 versus 7; P = .006); good collaterals, rated on DSA, were less frequent than in patients with a single thrombus (21.1% versus 44.2%, P = .051). The presence of multiple thrombi was a predictor of unfavorable outcome at 3 months (P = .040; OR, 0.251; 95% CI, 0.067-0.939). CONCLUSIONS Patients with multiple intracranial thrombus fragments constitute a small subgroup of patients with stroke, with a worse outcome than patients with a single thrombus.
Abstract:
INTRODUCTION Diagnostic tools that reliably show emboli, and protection techniques against embolization when employing stent retrievers, are necessary to improve endovascular stroke therapy. The aim of the present study was to investigate iatrogenic emboli using susceptibility-weighted imaging (SWI) in an open series of patients who had been treated with stent retriever thrombectomy using emboli protection techniques. METHODS Patients with anterior circulation stroke examined with MRI before and after stent retriever thrombectomy were assessed for iatrogenic embolic events. Thrombectomy was performed in flow arrest and under aspiration using a balloon-mounted guiding catheter, a distal access catheter, or both. RESULTS In 13 of 57 patients (22.8 %), post-interventional SWI sequences detected 16 microemboli. Three of them were associated with small ischemic lesions on diffusion-weighted imaging (DWI). None of the microemboli were located in a new vascular territory, none showed clinical signs, and all 13 patients were rated as Thrombolysis in Cerebral Infarction (TICI) 2b (n = 3) or 3 (n = 10). Retrospective re-evaluation of the digital subtraction angiography (DSA) detected discrete flow stagnation near the iatrogenic microemboli in four patients, with a positive persistent collateral sign in one. CONCLUSION Our study demonstrates two things: first, SWI seems to be more sensitive than DWI and DSA in detecting emboli; and second, proximal or distal protected stent retriever thrombectomy seems to prevent iatrogenic embolization into new vascular territories during retraction of the thrombus, but not downstream embolization during mobilization of the thrombus. Both techniques should be investigated and refined further.
Abstract:
Ischemic complications during aneurysm surgery are a frequent cause of postoperative infarctions and new neurological deficits. In this article, we discuss imaging and neurophysiological tools that may help the surgeon detect intraoperative ischemia. The strength of intraoperative digital subtraction angiography (DSA) is the complete view of the arterial and venous vasculature. DSA is the gold standard in complex and giant aneurysms, but due to certain disadvantages, it cannot be considered standard of care. Microvascular Doppler sonography is probably the fastest diagnostic tool and can quickly aid the diagnosis of large vessel occlusions. Intraoperative indocyanine green videoangiography is the best tool to assess flow in perforating and larger arteries, as well as occlusion of the aneurysm sac. Intraoperative neurophysiological monitoring with somatosensory and motor evoked potentials indirectly measures blood flow by recording neuronal function. It covers all causes of intraoperative ischemia, provided that the ischemia occurs in the brain areas under surveillance. However, every method has advantages and disadvantages, and no single method is superior to the others in every aspect. Therefore, it is very important for the neurosurgeon to know the strengths and weaknesses of each tool in order to have them available, to know how to use them in each individual situation, and to be ready to apply them within the time window for reversible cerebral ischemia.
Abstract:
Steam Generator Tube Rupture (SGTR) accidents in Pressurized Water Reactors (PWRs) are known to be among the most demanding transients for the operating crew. SGTR is a special kind of transient because it can lead to radiological releases to the environment without prior core damage or containment failure, since the affected steam generator can constitute a direct path from the reactor coolant system to the environment. In the safety analyses, SGTR is analysed from both a deterministic and a probabilistic point of view, with quite different assumptions regarding the operator actions and the consequences considered. When Deterministic Safety Analysis (DSA) began, the SGTR was analysed without crediting operator action during the first 30 minutes of the transient, under the assumption that the operating crew was able to stop the primary-to-secondary leakage within that time. However, the real SGTR accidents that have occurred in the USA and around the world demonstrated that operators can take more than 30 minutes to stop the leakage in actual sequences. Several methodologies were developed in the USA and in Europe to address this issue. In Probabilistic Safety Analysis (PSA), the operator actions are taken into account to define the headers of the event tree, and the available times are used to establish the success criteria for those headers. However, in a sequence as dynamic as SGTR, each operator action strongly depends on the time left by the preceding human actions. Moreover, some SGTR sequences can lead to offsite radiological releases without previous core damage, and these are not accounted for in the PSA because, from the point of view of core integrity, they are successful. Therefore, to analyse all these factors, the appropriate way to study this kind of sequence may be a Dynamic Event Tree (DET) methodology.
This Thesis compares the impact on the transient evolution and on the offsite dose of the most relevant hypotheses found in the Deterministic Safety Analyses used worldwide. The comparison is performed with a three-loop Westinghouse PWR model (Almaraz NPP) in the TRACE thermal-hydraulic code, with best-estimate assumptions but including deterministic hypotheses such as the single failure criterion or loss of offsite power. The offsite doses are calculated with the RADTRAD code, as it is one of the codes normally used for SGTR dose calculations. The behaviour of the reactor and the offsite doses are quite diverse depending on the assumptions made in each methodology. On the other hand, despite the conservatisms introduced, the results are quite far from the regulatory limits. In the next stage of the Thesis, an integrated safety analysis of the SGTR has been performed following the Integrated Safety Assessment (ISA) methodology developed by the Spanish Nuclear Safety Council (CSN). For this purpose, a thermal-hydraulic analysis of a three-loop Westinghouse PWR model has been carried out with the MAAP code. The ISA methodology allows the SGTR Dynamic Event Tree to be obtained, taking into account the uncertainties in the operator actuation times. The simulations were performed with SCAIS (Simulation Code system for Integrated Safety Assessment), which includes a dynamic coupling with the MAAP thermal-hydraulic code. The offsite doses were also calculated with RADTRAD. In the results, the consequences of the sequences are considered, for the first time in the literature, not only in terms of core damage but also in terms of offsite doses. This Thesis demonstrates the need to analyse all the consequences that contribute to the risk of an accident such as SGTR. To that end, an integrated methodology such as ISA-CSN has been used. With this approach, the DSA view of the SGTR (radiological consequences) is joined with the PSA view of the SGTR (core damage consequences) to assess the total risk of the accident.
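As a rough illustration of the Dynamic Event Tree idea described above, the following sketch samples operator actuation times from assumed distributions and classifies each branch both by a core-damage surrogate and by a toy offsite-dose measure. Everything in it (function names, distributions, thresholds) is a hypothetical placeholder; the thesis itself couples SCAIS with MAAP and computes doses with RADTRAD.

    import random

    # Illustrative sketch of a dynamic-event-tree style exploration of an SGTR
    # sequence: operator actuation times are sampled from assumed uncertainty
    # distributions and each branch is classified by its consequence.
    # All models and numbers below are made-up placeholders.

    def leak_stopped_time(identify_t, isolate_t, depressurize_t):
        """Time (min) at which the primary-to-secondary leak is terminated."""
        return identify_t + isolate_t + depressurize_t

    def offsite_dose(stop_time_min):
        """Toy dose model: release grows with the time the leak stays open."""
        return 0.05 * stop_time_min  # mSv, purely illustrative

    def run_branch(rng):
        identify = rng.lognormvariate(2.3, 0.4)      # ~10 min median to diagnose
        isolate = rng.lognormvariate(2.0, 0.5)       # isolate the affected SG
        depressurize = rng.lognormvariate(3.0, 0.4)  # cool down and depressurize
        t_stop = leak_stopped_time(identify, isolate, depressurize)
        dose = offsite_dose(t_stop)
        core_damage = t_stop > 180.0                 # assumed surrogate criterion
        return t_stop, dose, core_damage

    rng = random.Random(42)
    branches = [run_branch(rng) for _ in range(10000)]
    p_core_damage = sum(cd for *_, cd in branches) / len(branches)
    p_dose_exceeded = sum(d > 2.0 for _, d, _ in branches) / len(branches)
    print(f"P(core damage surrogate)        = {p_core_damage:.4f}")
    print(f"P(offsite dose > 2 mSv, toy)    = {p_dose_exceeded:.4f}")

In this toy setting most branches terminate the leak long before the core-damage surrogate is reached, yet a fraction of them still exceeds the assumed dose threshold; that is exactly the kind of consequence that success-oriented PSA criteria leave out and that the ISA approach tracks explicitly.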
Abstract:
A Steam Generator Tube Rupture (SGTR) in a Pressurized Water Reactor (PWR) can lead to an atmospheric release that bypasses the containment via the secondary system, exiting through the power-operated relief valves of the affected Steam Generator. That is why SGTR sequences have historically been treated in a special way in the different Deterministic Safety Analyses (DSA), focusing on the radioactive release rather than on the possibility of core damage, as is done for the other Loss of Coolant Accidents (LOCAs).
Abstract:
Steam Generator Tube Rupture (SGTR) sequences in Pressurized Water Reactors are known to be among the most demanding transients for the operating crew. SGTR is a special kind of transient because it can lead to radiological releases without core damage or containment failure, since the affected steam generator can constitute a direct path from the reactor coolant system to the environment. The first methodology used to perform the Deterministic Safety Analysis (DSA) of an SGTR did not credit operator action during the first 30 min of the transient, assuming that the operating crew was able to stop the primary-to-secondary leakage within that period of time. However, the real SGTR accidents that have occurred in the USA and around the world demonstrated that operators usually take more than 30 min to stop the leakage in actual sequences. Several methodologies were developed to overcome that fact, crediting operator actions from the beginning of the transient, as is done in Probabilistic Safety Analysis. This paper presents the results of comparing different assumptions regarding the single failure criterion and the operator actions taken from the most common methodologies included in the different Deterministic Safety Analyses. A single failure criterion that has not been analysed previously in the literature is also proposed and analysed in this paper. The comparison is performed with a three-loop Westinghouse PWR model (Almaraz NPP) in the TRACE code, with best-estimate assumptions but including deterministic hypotheses such as the single failure criterion or loss of offsite power. The behaviour of the reactor is quite diverse depending on the assumptions made regarding the operator actions. On the other hand, although strong conservatisms, such as the single failure criterion, are included in the hypotheses, all the results are quite far from the regulatory limits. In addition, taking into account the offsite dose sensitivity results, some improvements to the Emergency Operating Procedures are outlined to minimize the offsite release from the damaged SG in case of an SGTR.
Abstract:
Deterministic Safety Analysis (DSA) is the procedure used to design the safety-related systems, structures and components of nuclear power plants (NPPs). DSA is based on computational simulations of a set of hypothetical accidents representative of the plant, called Design Basis Scenarios (DBS). Regulatory authorities require the calculation of a set of safety magnitudes in these simulations and establish regulatory acceptance criteria (RAC), which are restrictions that the values of those magnitudes must fulfil. Methodologies for performing DSA can be categorized as conservative or realistic. Conservative methodologies use markedly pessimistic predictive models and assumptions and are therefore relatively simple; they do not need to include an uncertainty analysis of their results. Realistic methodologies are based on realistic, usually mechanistic, models and assumptions, and are supplemented with an uncertainty analysis of their main results. They are also called BEPU ("Best Estimate Plus Uncertainty") methodologies, and in them the uncertainty is basically represented in a probabilistic way. For conservative methodologies, the RAC are simply restrictions on the calculated values of the safety magnitudes, which must remain confined to an "acceptance region" of their range. For BEPU methodologies, the RAC cannot be so simple, because the safety magnitudes are now uncertain variables. This Thesis develops the way in which uncertainty is introduced into the RAC. Basically, confinement to the same acceptance region defined by the regulator is maintained, but strict compliance is not demanded; instead, a high level of certainty is required. In the adopted formalism this means a high level of probability, and that probability corresponds to the calculation uncertainty of the safety magnitudes. This uncertainty can be regarded as originating in the inputs to the calculation model and propagated through the model. Uncertain inputs include the initial and boundary conditions of the calculation and the empirical model parameters, which are used to incorporate the uncertainty due to model imperfection. Compliance with the RAC is therefore required with a probability not lower than a value P0 close to 1 and defined by the regulator (probability or coverage level).
However, the calculation uncertainty of the magnitude is not the only uncertainty involved. Even if a model (its basic equations) is perfectly known, the input-output mapping it produces is known only imperfectly (unless the model is very simple). The uncertainty due to ignorance about the action of the model is called epistemic; it can also be described as uncertainty about the propagation. As a consequence, the probability of fulfilling the RAC cannot be known perfectly; it is itself an uncertain quantity. This justifies another term used here for this epistemic uncertainty: meta-uncertainty. The RAC must incorporate both types of uncertainty: that of the calculation of the safety magnitude (here called aleatory) and that of the calculation of the probability (called epistemic or meta-uncertainty). Both uncertainties can be introduced in two ways: separated or combined. In either case, the RAC becomes a probabilistic criterion. If the uncertainties are separated, a second-order probability is used; if they are combined, a single probability is used. If a second-order probability is employed, the regulator must impose a second level of compliance, referring to the epistemic uncertainty; it is called the regulatory confidence level and must be a number close to 1. The pair formed by the two regulatory levels (probability and confidence) is called the regulatory tolerance level. The Thesis argues that the best way to construct the BEPU RAC is to separate the uncertainties, for two reasons: first, experts advocate the separate treatment of aleatory and epistemic uncertainty; second, the separated RAC is (except in exceptional cases) more conservative than the combined RAC.
The BEPU RAC is nothing but a hypothesis about a probability distribution, and it is checked statistically. The Thesis classifies the statistical methods to verify the BEPU RAC into three categories, depending on whether they are based on the construction of tolerance regions, on quantile estimates, or on probability estimates (either of compliance or of exceedance of regulatory limits). Following a recently proposed nomenclature, the first two categories correspond to Q-methods and the third to P-methods. The purpose of the classification is not to inventory the numerous and varied methods in each category, but to relate the categories and to cite the most widely used methods and those best regarded from a regulatory standpoint. Special mention is made of the method most used so far, the nonparametric method of Wilks, together with its extension by Wald to the multidimensional case. Its P-method counterpart, the Clopper-Pearson interval, typically ignored in the BEPU field, is also described. In this context, the problem of the computational cost of the uncertainty analysis is addressed. The Wilks, Wald and Clopper-Pearson methods require the random sample to have a minimum size, which grows with the required tolerance level. The sample size is an indicator of the computational cost, because each sample element is a value of the safety magnitude that requires a calculation with predictive models. Special emphasis is placed on the computational cost when the safety magnitude is multidimensional, i.e. when the RAC is a multiple criterion. It is shown that, when the different components of the magnitude are obtained from the same calculation, the multidimensional character introduces no additional computational cost. This disproves a common belief in the BEPU field: that the multidimensional problem can only be tackled through the Wald extension, whose computational cost grows with the dimension of the problem. In the case, which sometimes occurs, in which each component of the magnitude is calculated independently of the others, the influence of the dimension on the cost cannot be avoided. The first BEPU methodologies propagated the uncertainties through a surrogate model (metamodel or emulator) of the predictive model or code. The goal of the metamodel is not its predictive capability, which is much lower than that of the original model, but to replace it exclusively in the propagation of uncertainties. To this end, the metamodel must be built with the input parameters that contribute most to the uncertainty of the result, which requires a prior importance or sensitivity analysis. Because of its simplicity, the surrogate model entails hardly any computational cost and can be studied exhaustively, for example with random samples. As a consequence, the epistemic uncertainty or meta-uncertainty disappears, and the BEPU criterion for metamodels becomes a simple probability. In short, the regulator will more readily accept the statistical methods that need the fewest assumptions: exact rather than approximate, nonparametric rather than parametric, and frequentist rather than Bayesian.
The BEPU criterion is based on a second-order probability. The probability that the safety magnitudes lie in the acceptance region can be regarded not only as a probability of success or a degree of compliance with the RAC; it also has a metric interpretation: it represents a distance (within the range of the magnitudes) from the calculated magnitude to the regulatory acceptance limits. This interpretation leads to a definition proposed in this Thesis: the probabilistic safety margin. Given a scalar safety magnitude with an upper acceptance limit, the safety margin (SM) between two values A and B of that magnitude is defined as the probability that A is less than B, obtained from the uncertainties of A and B. The probabilistic definition of SM has several advantages: it is dimensionless, it takes values in the interval (0,1), it can be combined according to the laws of probability, and it is easily generalized to several dimensions. Moreover, it is not symmetric. The term safety margin can be applied to different situations: the distance from a calculated magnitude to a regulatory limit (licensing margin); the distance from the real value of the magnitude to its calculated value (analytical margin); or the distance from a regulatory limit to the damage threshold of a barrier (barrier margin). This idea of representing distances (in the range of safety magnitudes) by probabilities can be applied to the study of conservatism. The analytical margin can be interpreted as the degree of conservativeness (DG) of the calculation methodology. Using probability, the conservatism of tolerance limits of a magnitude can be quantified, and conservatism indicators can be established to compare different methods of constructing tolerance limits and regions.
One topic that has never been rigorously addressed is the validation of BEPU methodologies. Like any other calculation tool, a methodology must be validated before it can be applied to licensing analyses, by comparing its predictions with real values of the safety magnitudes. Such a comparison can only be made in accident scenarios for which measured values of the safety magnitudes exist, which essentially means experimental facilities. The ultimate goal of establishing the RAC is to verify that they are fulfilled by the real values of the safety magnitudes, and not only by their calculated values. The Thesis proves that a sufficient condition for this ultimate goal is the joint fulfilment of two criteria: the licensing BEPU RAC and an analogous criterion applied to validation. The validation criterion must be demonstrated in experimental scenarios and extrapolated to nuclear power plants. The licensing criterion requires a minimum value (P0) of the probabilistic licensing margin; the validation criterion requires a minimum value of the analytical margin (i.e., of the DG). These minimum levels are basically complementary: the higher one of them, the lower the other. Current regulatory practice imposes a high value on the licensing margin, which means that the required DG is small. Adopting lower values of P0 would imply a weaker requirement on RAC compliance and, in turn, a stronger requirement on the DG of the methodology. It is important to stress that the higher the required minimum margin (licensing or analytical), the higher the computational cost of demonstrating it, so the computational efforts are also complementary: if one of the levels is high, increasing the demand on the corresponding criterion, the computational cost increases. If an intermediate value of P0 is adopted, the required DG is also intermediate, so the methodology does not need to be very conservative and the total computational cost (licensing plus validation) can be optimized.
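Some of the quantitative notions in this abstract can be made concrete with a short, self-contained sketch: the first-order Wilks sample size, an exact one-sided Clopper-Pearson lower bound on a compliance probability, and a Monte Carlo estimate of the probabilistic safety margin P(A < B). This is a minimal illustration under assumed inputs, not the thesis' tooling; the function names are invented for the sketch, and the peak-cladding-temperature sample and 1477 K limit are placeholder numbers.

    import math
    import random

    def wilks_sample_size(coverage=0.95, confidence=0.95):
        """Smallest n with 1 - coverage**n >= confidence (one-sided, first order).
        The maximum of n runs is then an upper tolerance limit with the given
        coverage and confidence; for 95/95 this returns the well-known n = 59."""
        return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

    def clopper_pearson_lower(k, n, confidence=0.95, tol=1e-10):
        """One-sided exact lower confidence bound for a success probability,
        given k successes in n trials (bisection on the binomial tail)."""
        if k == 0:
            return 0.0
        alpha = 1.0 - confidence
        def tail(p):  # P(X >= k) for X ~ Binomial(n, p)
            return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if tail(mid) < alpha:
                lo = mid
            else:
                hi = mid
        return lo

    def probabilistic_safety_margin(sample_a, sample_b, n=100000, seed=0):
        """SM(A, B) = P(A < B), estimated from independent samples of A and B."""
        rng = random.Random(seed)
        return sum(rng.choice(sample_a) < rng.choice(sample_b) for _ in range(n)) / n

    print(wilks_sample_size())                  # 59 runs for a 95/95 tolerance limit
    print(clopper_pearson_lower(k=59, n=59))    # ~0.95 when all 59 runs comply
    sm = probabilistic_safety_margin(
        sample_a=[random.gauss(1100, 60) for _ in range(2000)],  # made-up calculated PCT, K
        sample_b=[1477.0],                                       # placeholder acceptance limit, K
    )
    print(sm)

The 59-run 95/95 Wilks size and the roughly 0.95 Clopper-Pearson lower bound obtained when all 59 runs comply illustrate the duality between the Q-method and P-method views mentioned in the abstract.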
Abstract:
The loss of coolant accident (LOCA) in a nuclear reactor is one of the most studied and concerning Design Basis Accidents since the beginning of the use of fission technology in the power industry. From the point of view of safety analyses, LOCA holds a forefront place in both Deterministic Safety Analysis (DSA) and Probabilistic Safety Analysis (PSA), whose distinct perspectives have evolved notably with respect to the credit given to safeguard performance and operator actions. This Thesis addresses the systematic analysis of small and medium break LOCA sequences at different locations of a Pressurized Water Reactor (PWR) with total failure of High Pressure Safety Injection (HPSI). The analysis is based on the Integrated Safety Assessment (ISA) methodology, developed by the Spanish Nuclear Safety Council (CSN), which applies advanced simulation and PSA methods to obtain Damage Domains that topologically quantify the probabilities of success and damage as a function of certain uncertain parameters. The thermal-hydraulic code TRACE v5.0 (patch 2), endorsed by the US NRC as a plant code for the simulation and analysis of sequences in light water reactors (LWRs), has been used for this work.
The main objectives of the work are: (1) the exhaustive analysis of small and medium break LOCA sequences at different locations of a three-loop Westinghouse PWR (Almaraz NPP), with HPSI failure, as a function of parameters of great importance for the transients, such as the break size and the delay in the operator response; (2) the obtaining and analysis of the Damage Domains for LOCA transients in PWRs, according to the ISA methodology; and (3) the review of some of the generic safety analysis results for LOCA sequences under those conditions. The results of the Thesis cover three distinct areas: (a) the physical phenomenology of the sequences under study; (b) the conclusions of the safety analyses performed on the LOCA transients; and (c) the relevance of the consequences of the human actions of the operating crew. These results, in turn, are of two main types: (1) confirmation of previous knowledge about the kind of sequences analysed, as reported in the extensive literature reviewed; and (2) findings in each of the three aforementioned areas that are not reported in the literature. In short, the results of the Thesis support the use of the ISA methodology as an alternative and systematic method for the analysis of accidental sequences in LWRs.
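A minimal sketch of the Damage Domain idea follows, with a made-up surrogate standing in for the TRACE simulations: each point of the (break size, operator delay) plane is classified as success or damage, and the damage region is then weighted with assumed distributions of the uncertain parameters. All functions, grids and distributions are illustrative placeholders, not the thesis' models.

    import math
    import random

    def leads_to_core_damage(break_size_in, operator_delay_min):
        """Hypothetical surrogate: larger breaks leave less time before core
        uncovery, so the tolerable operator delay shrinks with break size."""
        allowed_delay = 120.0 / (1.0 + break_size_in)   # minutes, illustrative
        return operator_delay_min > allowed_delay

    # 1) Damage Domain on a coarse grid of the two uncertain parameters.
    sizes = [0.5 + 0.25 * i for i in range(15)]         # break size, inches
    delays = [5.0 * j for j in range(25)]               # operator delay, minutes
    damage_domain = {(s, d) for s in sizes for d in delays
                     if leads_to_core_damage(s, d)}

    # 2) Damage probability: integrate the damage indicator over assumed
    #    parameter distributions (uniform break size, lognormal operator delay).
    rng = random.Random(1)
    n = 50000
    hits = 0
    for _ in range(n):
        s = rng.uniform(0.5, 4.0)
        d = rng.lognormvariate(math.log(30.0), 0.5)
        hits += leads_to_core_damage(s, d)
    print(f"grid points in damage domain: {len(damage_domain)}")
    print(f"estimated damage probability: {hits / n:.3f}")

In the ISA framework the boundary of such a domain, together with the probability mass falling inside it, is what links the thermal-hydraulic simulations to the probabilistic quantification of the sequences.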