155 results for fukushima
Abstract:
Following the Fukushima accident, a series of characterization and decontamination activities have been launched in the contaminated areas around the plant, aimed at making them habitable again. Drawing on the experience of Japan, which the speaker knows first-hand as a member of the European NERIS-TP network and from a recent mission to Fukushima, this course addressed the most up-to-date techniques used to design and propose strategies for the recovery of large contaminated surfaces.
Abstract:
After the Fukushima nuclear accident it became clear that the long-term cooling of spent fuel pools at nuclear power plants is compromised in the event of a total loss of electrical power (station blackout, SBO): during a prolonged SBO there are, a priori, no systems for cooling the fuel assemblies that do not depend on the emergency diesel generators or the external grid. In this work, the cooling of a spent fuel pool has been studied with the CFD code STAR-CCM+, both under normal conditions and after a loss of the cooling system. The pool cooling was then evaluated with two passive systems that are able to cool the fuel assemblies, in a passive way, for a certain period after the cooling system is lost. This provides a margin of time before the pool water reaches boiling, thereby improving the cooling and the safety of the pool.
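As a rough illustration of the margin the abstract refers to, the adiabatic heat-up time of a pool can be estimated from a simple energy balance. The sketch below uses invented round numbers, not values from the thesis or its STAR-CCM+ model:

```python
# Illustrative estimate of the time for a spent fuel pool to reach
# saturation after a station blackout, assuming adiabatic heat-up
# (no losses, well-mixed pool). All parameter values are hypothetical
# round numbers.

CP_WATER = 4186.0  # J/(kg*K), specific heat capacity of liquid water

def hours_to_saturation(mass_kg, decay_heat_w, t_initial_c, t_sat_c=100.0):
    """Hours until the pool bulk reaches saturation temperature."""
    energy_needed_j = mass_kg * CP_WATER * (t_sat_c - t_initial_c)
    return energy_needed_j / decay_heat_w / 3600.0

# e.g. 1500 t of water at 40 C heated by 2 MW of decay heat
print(f"{hours_to_saturation(1.5e6, 2.0e6, 40.0):.0f} h to saturation")  # ~52 h
```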
Abstract:
The Integrated Safety Assessment (ISA) methodology, developed by the Spanish Nuclear Safety Council (CSN), has been applied to a thermal-hydraulic analysis of PWR Station Blackout (SBO) sequences in the context of the IDPSA (Integrated Deterministic-Probabilistic Safety Assessment) network objectives. The ISA methodology yields the damage domain (the region of the uncertain-parameter space where the damage limit is exceeded) for each sequence of interest as a function of the operator actuation times. Given a particular safety or damage limit, several data are needed for every sequence in order to obtain the exceedance frequency of that limit. In this application these data are obtained from simulations of transients inside each damage domain, performed with the MAAP code, together with the time-density probability distributions of the manual actions. The damage limits considered in this analysis are: local cladding damage (PCT > 1477 K), local fuel melting (T > 2499 K), fuel relocation in the lower plenum, and vessel failure; a different damage domain therefore corresponds to each of these damage variables. The paper also considers the operation of the new passive thermal shutdown seals developed by several companies since the Fukushima accident. The results show the capability of the ISA methodology, or a similar one, and the need for it in order to obtain accurate results that account for time uncertainties.
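As a sketch of how such an exceedance frequency can be assembled once a damage domain is known, consider the Monte Carlo outline below. The damage-domain boundary and the action-time distribution are invented placeholders, not the paper's MAAP-derived results:

```python
# Hedged sketch: estimate the exceedance frequency of a damage limit by
# sampling operator actuation times and counting how often the sampled
# sequence falls inside the damage domain. The 3600 s grace time and the
# log-normal action-time density are hypothetical, not from the paper.
import random

def in_damage_domain(t_action_s):
    """Placeholder damage domain: limit exceeded if the manual action
    arrives later than an assumed 3600 s grace time."""
    return t_action_s > 3600.0

def exceedance_frequency(seq_freq_per_yr, n_samples=100_000):
    hits = sum(in_damage_domain(random.lognormvariate(8.0, 0.5))
               for _ in range(n_samples))
    return seq_freq_per_yr * hits / n_samples  # freq x P(damage | sequence)

print(exceedance_frequency(1e-5))  # e.g. for an SBO sequence of 1e-5 /yr
```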
Abstract:
The exhaustion, absolute absence, or simple uncertainty about the size of fossil fuel reserves, combined with the volatility of their prices and the growing instability of the supply chain, create strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that also includes strong public concern about pollution and greenhouse gas emissions. Given its excellent environmental impact, public acceptance of the new energy carrier will depend, a priori, on controlling the risks associated with its handling and storage. Among these, an undeniable risk of explosion appears as the main drawback of this alternative fuel.

This thesis investigates the numerical modeling of explosions in large volumes, focusing on the simulation of turbulent combustion in large computational domains where the achievable resolution is severely limited. The introduction gives a general description of explosion processes and concludes that the restrictions on resolution make it necessary to model both the turbulence and the combustion processes. A critical review of the methodologies available for each follows, pointing out the strengths, deficiencies, and suitability of every approach. The review concludes that, given the existing limitations, the only viable strategy for combustion modeling is to use an expression for the turbulent burning velocity as a function of various parameters; models of this kind, known as turbulent flame speed models, close a balance equation for the combustion progress variable. It also concludes that the most adequate solution for the turbulence is to use different methodologies, LES or RANS, depending on the geometry and the resolution restrictions of each particular problem.

Building on these findings, the candidate develops a combustion model within the turbulent flame speed framework that is able to overcome the deficiencies of the available models for problems requiring calculations at moderate or low resolution. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies on the accurate determination of the burning velocity, both laminar and turbulent. The laminar burning velocity is determined through a newly developed correlation that captures the simultaneous influence of the equivalence ratio, temperature, pressure, and dilution with steam; the resulting formulation is valid over a wider range of temperature, pressure, and steam dilution than any previously available formulation. The turbulent burning velocity, in turn, is obtained from correlations that express it as a function of various parameters. To select the most suitable one, the predictions of several expressions were compared with experimental results, with the outcome that the correlation due to Schmidt is the most adequate for the conditions studied.

The role of flame instabilities in the propagation of combustion fronts is then assessed. Their significance is notable for lean mixtures in which the turbulence intensity remains moderate, conditions that are typical of accidents at nuclear power plants. A model is therefore developed to estimate the effect of the instabilities, and specifically of the acoustic-parametric instability, on the flame propagation speed. This includes the mathematical derivation of the heuristic formulation of Bauwens et al. for the enhancement of the burning velocity due to flame instabilities, as well as an analysis of the stability of flames with respect to a cyclic velocity perturbation; the results are combined to complete the model of the acoustic-parametric instability.

The model was then applied to several problems of importance for industrial safety, and the results were analyzed and compared with the corresponding experimental data. Specifically, explosions in tunnels and in large containers were simulated, with and without concentration gradients and venting. As a general outcome, the model is validated, confirming its suitability for these problems. As a final task, an in-depth analysis of the Fukushima-Daiichi catastrophe was carried out, aimed at determining the amount of hydrogen that exploded in reactor one, in contrast with other studies on the subject, which have focused on the amount of hydrogen generated during the accident. The investigation determined that the most probable amount of hydrogen consumed during the explosion was 130 kg. It is remarkable that the combustion of such a relatively small quantity of hydrogen can cause such significant damage, which illustrates the importance of this type of research. The branches of industry for which the developed model will be of interest span the entire future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with a particular impact on the transport sector and on nuclear safety, in both fission and fusion technologies.
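For context, turbulent flame speed models of the kind described close a transport equation for the combustion progress variable c in which the turbulent burning velocity appears directly as a source term. A standard textbook form of this closure (shown here for orientation, not quoted from the thesis) is:

```latex
\frac{\partial (\rho c)}{\partial t} + \nabla \cdot (\rho \mathbf{u}\, c)
  = \nabla \cdot \!\left( \frac{\mu_t}{\mathrm{Sc}_t}\, \nabla c \right)
  + \rho_u \, S_T \, \lvert \nabla c \rvert
```

where \rho_u is the density of the unburnt mixture, \mu_t/\mathrm{Sc}_t the turbulent diffusivity, and S_T the turbulent burning velocity supplied by a correlation such as Schmidt's.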
Abstract:
Although extracellular application of lysophosphatidic acid (LPA) has been extensively documented to produce a variety of cellular responses through a family of specific G protein-coupled receptors, the in vivo organismal role of LPA signaling remains largely unknown. The first identified LPA receptor gene, lpA1/vzg-1/edg-2, was previously shown to have remarkably enriched embryonic expression in the cerebral cortex and dorsal olfactory bulb and postnatal expression in myelinating glia including Schwann cells. Here, we show that targeted deletion of lpA1 results in approximately 50% neonatal lethality, impaired suckling in neonatal pups, and loss of LPA responsivity in embryonic cerebral cortical neuroblasts, with survivors showing reduced size, craniofacial dysmorphism, and increased apoptosis in sciatic nerve Schwann cells. The suckling defect was responsible for the death among lpA1(−/−) neonates and the stunted growth of survivors. Impaired suckling behavior was attributable to defective olfaction, which is likely related to developmental abnormalities in the olfactory bulb and/or cerebral cortex. Our results provide evidence that endogenous lysophospholipid signaling requires an lp receptor gene and indicate that LPA signaling through the LPA1 receptor is required for normal development of an inborn, neonatal behavior.
Abstract:
Extracellular lysophosphatidic acid (LPA) produces diverse cellular responses in many cell types. Recent reports of several molecularly distinct G protein-coupled receptors have raised the possibility that the responses to LPA stimulation could be mediated by the combination of several uni-functional receptors. To address this issue, we analyzed one receptor encoded by ventricular zone gene-1 (vzg-1) (also referred to as lpA1/edg-2) by using heterologous expression in a neuronal and nonneuronal cell line. VZG-1 expression was necessary and sufficient in mediating multiple effects of LPA: [3H]-LPA binding, G protein activation, stress fiber formation, neurite retraction, serum response element activation, and increased DNA synthesis. These results demonstrate that a single receptor, encoded by vzg-1, can activate multiple LPA-dependent responses in cells from distinct tissue lineages.
Abstract:
A recombinant Mycobacterium bovis bacillus Calmette-Guérin (BCG) vector-based vaccine that secretes the V3 principal neutralizing epitope of human immunodeficiency virus (HIV) could induce an immune response to the epitope and prevent the viral infection. Using the Japanese consensus sequence of HIV-1, we successfully constructed chimeric protein secretion vectors by selecting an appropriate insertion site in a carrier protein and established the principal neutralizing determinant (PND)-peptide secretion system in BCG. The recombinant BCG (rBCG)-inoculated guinea pigs were initially screened by delayed-type hypersensitivity (DTH) skin reactions to the PND peptide, followed by passive transfer of the DTH by the systemic route. Further, immunization of mice with the rBCG resulted in induction of cytotoxic T lymphocytes. The guinea pig immune antisera showed elevated titers to the PND peptide and neutralized the HIV MN strain, and administration of serum IgG from the vaccinated guinea pigs was effective in completely blocking HIV infection in thymus/liver-transplanted severe combined immunodeficiency (SCID)/hu or SCID/PBL mice. In addition, the immune serum IgG was shown to neutralize primary field isolates of HIV that match the neutralizing sequence motif in a peripheral blood mononuclear cell-based virus neutralization assay. The data support the idea that the antigen-secreting rBCG system can be used as a tool for the development of HIV vaccines.
Abstract:
The analytical techniques currently used to quantify the lignin content of forage plants are of questionable accuracy. The acid detergent lignin (ADL) method, one of the most widely used in animal science and agronomy, has some flaws, particularly the partial solubilization of lignin during the preparation of acid detergent fiber (ADF). Klason lignin (KL), another widely used method, has the drawback of measuring cell-wall protein as lignin. In both procedures it is also recommended to measure ash in the lignin residues. Quantifying the lignin concentration by the spectrophotometric acetyl bromide lignin (ABL) method has been gaining interest among researchers in Brazil and abroad. In this methodology, the plant lignin contained in the cell wall (CW) preparation is solubilized in a 25% solution of acetyl bromide in acetic acid, and the absorbance is measured with UV light at 280 nm. The absorbance value is entered into a regression equation and the lignin concentration is obtained. For this analytical technique to gain wider acceptance among researchers, it must obviously be convincing and attractive. The present work analyzed some parameters related to ABL in 7 grasses and 6 legumes at two stages of maturity. Among the different pre-drying temperatures, the results indicated that drying at 55°C with ventilation and freeze-drying can be used with equal efficacy. Temperatures of 55°C without ventilation and 80°C without ventilation are not recommended, as they increased the ADF and ADL values, possibly due to the formation of analytical artifacts such as Maillard compounds. In the ABL method, the lower values of the legume samples drew attention and raised the question of whether the lignin of these plants is less soluble in the acetyl bromide reagent. Among several modifications tested in the ABL methodology, using a ball mill (to reduce particle size) on the CW samples showed no effect; the hypothesis was that smaller particles would improve lignin solubilization. Using an ultrasonicator, which increases molecular vibration and should thus facilitate the solubilization of lignin in the acetyl bromide reagent, improved lignin solubilization by about 10% in both grasses and legumes. A biological assay, the in vitro dry matter degradability (IVDMD), was included as a reference; and since lignin is intimately associated with the fibrous structure of the cell wall, an in vitro neutral detergent fiber degradability (IVNDFD) assay was also performed. The results confirmed the effect of maturity, with degradability decreasing in more mature plants, and that the lignin content of legumes is indeed lower than that of grasses. The degradability results showed the highest correlation coefficients with the ABL method when the ultrasound technique was employed; the KL method showed the lowest coefficients. The use of NDF as the fibrous preparation, instead of CW, was also tested successfully. The reason is simple: while NDF is widely known, the CW preparation is not. Unquestionably, this change will substantially facilitate the dissemination of the method, making it more acceptable to the scientific community.
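The calculation step described above (absorbance at 280 nm entered into a regression to obtain the lignin concentration) is simple enough to sketch. The coefficients below are placeholders standing in for a laboratory's own calibration, not published values:

```python
# Hypothetical sketch of the acetyl bromide lignin (ABL) calculation:
# convert the A280 of the solubilized cell-wall sample to a lignin
# concentration via a linear calibration, then express it as a percentage
# of the sample mass. Slope, intercept, and volumes are invented.

def lignin_percent(a280, slope, intercept, extract_ml, sample_mg):
    """Lignin (% of cell-wall sample) from UV absorbance at 280 nm."""
    conc_mg_per_ml = (a280 - intercept) / slope  # regression calibration
    lignin_mg = conc_mg_per_ml * extract_ml      # total lignin in the extract
    return 100.0 * lignin_mg / sample_mg

print(f"{lignin_percent(0.52, 0.60, 0.005, 10.0, 100.0):.1f} % lignin")  # ~8.6 %
```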
Abstract:
The lessons we can draw from the 2011 Fukushima nuclear accident appear to be the same as those from Chernobyl twenty-five years earlier, despite the different political environments. Apparently we have not learned much.
Abstract:
Initiated in May 2011, several months after the Fukushima nuclear disaster, Germany's energy transformation (Energiewende) has been presented as an irrevocable plan and, due to the speed of change required, represents a new quality in Germany's energy strategy. Its main objectives include: phasing out nuclear energy by 2022, developing renewable energy sources, expanding transmission networks, building new conventional power plants, and improving energy efficiency. The cornerstone of the strategy is the development of renewable energy. Under Germany's amended renewable energy law, the share of renewable energy in electricity generation is supposed to increase steadily from the current level of around 20% to approximately 38% in 2020, 50% in 2030, 65% in 2040, and as much as 80% in 2050. The impact of the Energiewende is not limited to the sphere of energy supplies. In the medium and long term it will change not only the way the German economy operates, but also the functioning of German society and the state. Facing difficulties with the expansion of transmission networks, the excessive cost of building wind farms, and problems with the stability of electricity supplies, especially during particularly cold winters, the federal government has so far tended to centralise power and limit the independence of the German federal states with regard to their respective energy policies, justifying this with the need for greater co-ordination. The Energiewende may also become the beginning of a "third industrial revolution", i.e. a transition to a green economy and a society based on sustainable development. This will require a new "social contract" that redefines the relations between the state, society and the economy. Negotiating such a contract will be one of the greatest challenges for German policy in the coming years.
Abstract:
One year after the events of Fukushima, the implementation of the new German energy strategy adopted in the summer of 2011 is being put to the test. Business circles, experts and publicists are sounding the alarm: the tempo at which the German economy is being rearranged to run on renewable energy sources is such that the task has turned out to be extremely difficult and expensive. The implementation of the key guidelines of the new strategy, such as the development of the transmission networks and the construction of new conventional power plants, is meeting increasing resistance in the form of economic and legal difficulties. The development of the green technologies sector is also posing problems: the solar energy industry, for example, is excessively subsidised, whereas the subsidies for the construction of offshore wind farms are too low. At present, only those guidelines of the strategy which investors evaluate as economically feasible, or which receive adequate financial support from the state, have a chance of being carried through. The strategy may also turn out to be unsuccessful due to the lack of comprehensive coordination of its implementation and the financial burden its introduction entails for both the public and the economy. In the immediate future, the German government will make efforts not only to revise its internal regulations in order to enable the energy transformation; it is also likely to undertake a number of measures at the EU forum to facilitate it. One should expect the German government to actively support the financing of both the development of energy networks in EU member states and the development of renewable energy sources in the energy sector.
Abstract:
Policies and politics are an integral part of socio-technical transitions but have so far received little attention in the transitions literature. Drawing on the advocacy coalition framework, our paper addresses this gap with a study of actors and coalitions in Swiss energy policy. Our results show that advocacy coalitions in Switzerland have largely remained stable despite the Fukushima shock. However, the heterogeneity of beliefs has increased, and by 2013 even a majority of actors expressed support for the energy transition – an indication that major policy change might be ahead. It seems that in socio-technical transitions, changes in the policy issue and in the actor base also work toward policy change, alongside changes in core beliefs. We suggest how the advocacy coalition framework can inform analysis and theory building in transition studies, and we present first ideas about the interplay of socio-technical systems and policy systems.
Abstract:
Machine learning techniques are used for prediction and for rule extraction from artificial neural network methods. The hypothesis that market sentiment and IPO-specific attributes are equally responsible for first-day IPO returns in the US stock market is tested. The machine learning methods used are Bayesian classification, support vector machines, decision tree techniques, rule learners and artificial neural networks. The outcomes of the research are predictions and rules associated with the first-day returns of technology IPOs. The hypothesis that first-day returns of technology IPOs are equally determined by IPO-specific and market sentiment attributes is rejected. Instead, lower-yielding IPOs are determined by both IPO-specific and market sentiment attributes, while higher-yielding IPOs depend largely on IPO-specific attributes.
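A minimal sketch of the kind of comparison described, with invented features and labels standing in for the IPO-specific and market-sentiment attributes (the actual data set, features, and tuning are not given in the abstract):

```python
# Hedged sketch: compare several classifier families on a synthetic stand-in
# for the IPO data set (high vs. low first-day return). Features and labels
# below are randomly generated purely for illustration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # e.g. offer size, price revision, index return, turnover
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=500) > 0).astype(int)

for name, clf in [("naive Bayes", GaussianNB()),
                  ("SVM", SVC()),
                  ("decision tree", DecisionTreeClassifier()),
                  ("neural net", MLPClassifier(max_iter=1000))]:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {score:.3f} CV accuracy")
```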
Abstract:
The oculomotor synergy, as expressed by the CA/C and AC/A ratios, was investigated to examine its influence on our previous observation that, whereas convergence responses to stereoscopic images are generally stable, some individuals exhibit significant accommodative overshoot. Using a modified video refraction unit while subjects viewed a stereoscopic LCD, accommodative and convergence responses to balanced and unbalanced vergence and focal stimuli (BVFS and UBVFS) were measured. Accommodative overshoot of at least 0.3 D was found in 3 out of 8 subjects for UBVFS. The accommodative response differential (RD) was taken to be the difference between the initial response and the subsequent mean static steady-state response; without overshoot, RD was quantified by finding the initial response component. A mean RD of 0.11 +/- 0.27 D was found for the 1.0 D step UBVFS condition, and a mean RD of 0.00 +/- 0.17 D for the BVFS. There was a significant positive correlation between the CA/C ratio and RD (r = +0.75, n = 8, p < 0.05) for UBVFS only. We propose that inter-subject variation in RD is influenced by the CA/C ratio as follows: an initial convergence response, induced by the disparity of the image, generates convergence-driven accommodation commensurate with the CA/C ratio; the associated transient defocus subsequently decays to a balanced position between defocus-induced and convergence-induced accommodation.
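The response differential and its correlation with the CA/C ratio reduce to a short computation; the data below are made up for illustration and are not the study's measurements:

```python
# Minimal sketch of the quantities defined above: RD is the initial
# accommodative response minus the mean static steady-state response,
# and its association with the CA/C ratio is tested with a Pearson
# correlation. All values here are invented for illustration.
import numpy as np
from scipy.stats import pearsonr

def response_differential(initial_d, steady_state_d):
    """RD in dioptres: initial response minus mean steady-state response."""
    return initial_d - float(np.mean(steady_state_d))

ca_c = np.array([0.30, 0.40, 0.50, 0.55, 0.60, 0.70, 0.80, 0.90])  # invented ratios
rd = np.array([-0.05, 0.05, 0.10, 0.12, 0.15, 0.20, 0.30, 0.40])   # invented RDs, D
r, p = pearsonr(ca_c, rd)
print(f"r = {r:+.2f}, n = {len(rd)}, p = {p:.3f}")
```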