951 results for Modeling. Simulation


Abstract:

A mathematical model for the group combustion of pulverized coal particles was developed in a previous work. It includes a Lagrangian description of the dehumidification, devolatilization and char gasification reactions of the coal particles in the homogenized gaseous environment resulting from the three fuels, CO, H2 and volatiles, supplied by the gasification of the particles, together with their simultaneous group combustion through gas-phase oxidation reactions, which are considered to be very fast. This model is complemented here with an analysis of the particle dynamics, determined principally by the effects of aerodynamic drag and gravity, and of the particle dispersion, based on a stochastic model. It is also extended to include two other, simpler models for the gasification of the particles: the first for particles small enough to extinguish the surrounding diffusion flames, and the second for particles with small ash content, whose porous shell of ashes remaining after gasification of the char is not structurally stable and is disrupted. As an example of the applicability of the models, they are used in the numerical simulation of an experiment on a non-swirling pulverized coal jet discharging into nearly stagnant air at ambient temperature, with an initial region of interaction with a small annular methane flame. Computational algorithms for solving the different stages undergone by a coal particle during its combustion are proposed. For the partial differential equations modeling the gas phase, a second-order finite element method combined with a semi-Lagrangian characteristics method is used. The results obtained with the three versions of the model are compared with one another and show that the first of the simpler models best fits the experimental results.
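
The abstract specifies the gas-phase solver only at a high level. As a minimal illustration of the characteristics (semi-Lagrangian) ingredient, the following 1D sketch traces departure points and interpolates linearly; the second-order finite elements of the actual method are not reproduced, and all names and numbers are mine:

```python
import numpy as np

def semi_lagrangian_step(phi, u, dt, dx):
    """One semi-Lagrangian (characteristics) advection step on a periodic
    1D grid: trace each node back to its departure point x - u*dt and
    interpolate the field there (linear interpolation in this sketch)."""
    n = phi.size
    x = np.arange(n) * dx
    x_dep = (x - u * dt) % (n * dx)       # departure points of the characteristics
    i = (x_dep // dx).astype(int) % n     # left neighbour of each departure point
    w = x_dep / dx - (x_dep // dx)        # linear interpolation weight in [0, 1)
    return (1.0 - w) * phi[i] + w * phi[(i + 1) % n]

# usage: transport a Gaussian scalar profile (e.g. a progress variable) at u = 1
nx = 200
phi = np.exp(-0.5 * ((np.linspace(0.0, 1.0, nx, endpoint=False) - 0.3) / 0.05) ** 2)
for _ in range(100):
    phi = semi_lagrangian_step(phi, u=1.0, dt=0.002, dx=1.0 / nx)
```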

Abstract:

The study of magnesium and its alloys is a hot research topic in structural materials. In particular, special attention is being paid to understanding the relationship between microstructure and mechanical behavior in order to optimize the current alloy microstructures and guide the design of new alloys. However, the particular effect of several microstructural factors (precipitate shape, size and orientation, grain morphology distribution, etc.) on the mechanical performance of a Mg alloy is still under study. The combination of advanced characterization techniques and modeling at several length scales is necessary to improve the understanding of the relation between microstructure and mechanical behavior. Regarding simulation techniques, polycrystalline homogenization is a very useful tool to predict the macroscopic response from the polycrystalline microstructure (grain size, shape and orientation distributions) and the crystal behavior. The microstructure description is fully covered by modern characterization techniques (X-ray diffraction, EBSD, optical and electron microscopy). However, the mechanical behavior of single crystals is not well known, especially in Mg alloys, where the correct parameterization of the mechanical behavior of the different slip/twin modes is a very difficult task. A computational homogenization framework for predicting the behavior of magnesium alloys has been developed in this thesis. The polycrystalline behavior was obtained by means of the finite element simulation of a representative volume element (RVE) of the microstructure, including the actual grain shape and orientation distributions. The crystal behavior of the grains was accounted for by a crystal plasticity model which takes into account the physical deformation mechanisms, e.g. slip and twinning. Finally, the parameterization of the crystal behavior (critical resolved shear stresses (CRSS) and strain hardening rates of all the slip and twinning modes) was obtained by the development of an inverse optimization methodology, one of the main original contributions of this thesis. The inverse methodology aims at finding, by means of the Levenberg-Marquardt optimization algorithm, the set of parameters defining crystal behavior that best fits a set of independent macroscopic tests. The objectivity of the method and the uniqueness of the solution as a function of the input information have been numerically studied. The inverse optimization strategy was first used to obtain the crystal behavior of a rolled polycrystalline AZ31 Mg alloy that showed a marked basal texture and a strong plastic anisotropy. Four different deformation mechanisms, basal, prismatic and pyramidal ⟨c+a⟩ slip, together with tensile twinning, were included to characterize the single crystal behavior. The validity of the resulting parameters was proved by the ability of the polycrystalline model to predict independent macroscopic tests in different directions. Secondly, the influence of Neodymium (Nd) content on an extruded polycrystalline Mg-Mn-Nd alloy was studied using the same homogenization and optimization framework. The effect of Nd addition was a progressive isotropization of the macroscopic behavior. The model showed that this increase in macroscopic isotropy was due to a randomization of the initial texture and also to an increase in the isotropy of the crystal behavior (similar values of the CRSSs of the different modes). Finally, the model was used to analyze the effect of temperature on the crystal behavior of the Mg-Mn-Nd alloy. The introduction in the model of non-Schmid effects on the pyramidal ⟨c+a⟩ slip allowed the model to capture the inverse strength differential that appeared between tension and compression above 150 °C. This is the first time, to the author's knowledge, that non-Schmid effects have been reported for Mg alloys.
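
The inverse methodology itself is generic least-squares fitting. The sketch below shows the fitting loop with SciPy's Levenberg-Marquardt driver and a toy Voce-type curve standing in for the crystal-plasticity RVE simulation; the forward model, parameters and data are invented for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

eps = np.linspace(0.0, 0.1, 50)                 # macroscopic strain grid

def forward_model(p, eps):
    """Toy stand-in for the RVE polycrystal run: a Voce-type hardening curve
    sigma = s0 + (s1 - s0) * (1 - exp(-h * eps / s1)). In the thesis the
    forward model is a full crystal-plasticity FE simulation returning the
    macroscopic stress for given CRSS and hardening rates; only the
    optimization loop is illustrated here."""
    s0, s1, h = p
    return s0 + (s1 - s0) * (1.0 - np.exp(-h * eps / s1))

# synthetic "experiments" (e.g. tension curves in two loading directions)
rng = np.random.default_rng(0)
p_true = np.array([60.0, 220.0, 900.0])
tests = [forward_model(p_true, eps) + rng.normal(0.0, 2.0, eps.size)
         for _ in range(2)]

def residuals(p):
    # stack the misfit of every test into one residual vector
    return np.concatenate([forward_model(p, eps) - s for s in tests])

fit = least_squares(residuals, x0=np.array([30.0, 150.0, 500.0]), method="lm")
print(fit.x)   # recovered parameters, to be compared with p_true
```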

Abstract:

In a large number of physical, biological and environmental processes, interfaces with highly irregular geometry appear, separating media (phases) in which the heterogeneity of constituents is present. In this work the interplay between irregular structures and surrounding heterogeneous distributions in the plane is quantified. For a geometric set $E$ and a mass distribution (measure) $\mu$ supported in a region containing $E$, the mass $\mu(E(\delta))$ of the $\delta$-parallel set of $E$ accounts for the interplay between the geometric structure and the surrounding distribution. A computation method is developed for the estimation and corresponding scaling analysis of $\mu(E(\delta))$, with $E$ a fractal plane set of Minkowski dimension $D$ and $\mu$ a multifractal measure produced by random multiplicative cascades. The method is applied to natural and mathematical fractal structures in order to study the influence of both the irregularity of the geometric structure and the heterogeneity of the distribution on the scaling of $\mu(E(\delta))$. Applications to the analysis and modeling of the interplay of phases in environmental scenarios are given.
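
As a hedged illustration of the measures involved, the sketch below generates a random multiplicative binomial cascade and evaluates the mass captured around a trivially simple "structure" (a single point) at dyadic scales; the actual method applies this to fractal sets of dimension $D$:

```python
import numpy as np

def cascade_measure(levels=10, p=(0.7, 0.3), rng=None):
    """Random multiplicative binomial cascade on [0, 1]: at each level the
    mass of every cell is split between its two halves, with the weights
    (p, 1-p) assigned in random order."""
    rng = rng or np.random.default_rng(0)
    mass = np.array([1.0])
    for _ in range(levels):
        flip = rng.random(mass.size) < 0.5
        left = np.where(flip, p[0], p[1])
        mass = np.column_stack((mass * left, mass * (1.0 - left))).ravel()
    return mass                              # 2**levels cells, total mass 1

mu = cascade_measure()
n = mu.size
# mass mu(E(delta)) held in the delta-neighborhood of E = {midpoint}
for k in range(1, 9):
    delta = 2.0 ** -k
    half = int(delta * n)
    m = mu[n // 2 - half: n // 2 + half].sum()
    print(f"delta = 2^-{k}   mu(E(delta)) = {m:.5f}")
# the slope of log mu(E(delta)) vs log delta gives the scaling exponent
```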

Abstract:

With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is characterized by purely mechanistic criteria, functions of quantities such as stress, strain or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has rarely been explored. In particular, a quantitative mechanically based model of electrophysiological impairment in neuronal cells has only very recently been proposed (Jerusalem et al., 2013). In this paper, we present the implementation details of Neurite: the finite difference parallel program used in this reference. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that increase in complexity as the network of simulated cells grows. The solvers implemented in Neurite, explicit and implicit, were therefore parallelized using graphics processing units in order to reduce the simulation costs of large scale scenarios. Cable theory and Hodgkin-Huxley models were implemented to account for the electrophysiologically passive and active regions of a neurite, respectively, whereas a coupled mechanical model accounting for the neurite mechanical behavior within its surrounding medium was adopted as the link between electrophysiology and mechanics (Jerusalem et al., 2013). This paper provides the details of the parallel implementation of Neurite, along with three different application examples: a long myelinated axon, a segmented dendritic tree, and a damaged axon. The capabilities of the program to deal with large scale scenarios, segmented neuronal structures, and functional deficits under mechanical loading are specifically highlighted.
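
For readers unfamiliar with the underlying discretization, here is a minimal explicit finite-difference sketch of an active Hodgkin-Huxley cable, the kind of computation Neurite's explicit solver parallelizes; parameters are generic textbook squid-axon values, and the myelinated (passive) sections, GPU parallelization and mechanical coupling are not reproduced:

```python
import numpy as np

# Hodgkin-Huxley rate functions (V in mV, rates in 1/ms)
def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

nx, dx, dt = 200, 2e-3, 1e-3              # compartments, cm, ms (stable pair)
Cm, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3   # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4          # mV
a, Ri = 2e-4, 0.1                         # fiber radius (cm), kOhm*cm

V = np.full(nx, -65.0)                    # resting potential
m = a_m(V) / (a_m(V) + b_m(V))            # gates start at steady state
h = a_h(V) / (a_h(V) + b_h(V))
n = a_n(V) / (a_n(V) + b_n(V))

for step in range(4000):                  # 4 ms of activity
    Iion = gNa*m**3*h*(V - ENa) + gK*n**4*(V - EK) + gL*(V - EL)
    lap = np.zeros(nx)                    # zero at ends: crude sealed boundaries
    lap[1:-1] = (V[2:] - 2*V[1:-1] + V[:-2]) / dx**2
    Istim = np.zeros(nx)
    if step < 500:
        Istim[:5] = 100.0                 # uA/cm^2 injected at one end for 0.5 ms
    V = V + dt/Cm * (a/(2*Ri) * lap - Iion + Istim)
    m += dt * (a_m(V)*(1 - m) - b_m(V)*m)
    h += dt * (a_h(V)*(1 - h) - b_h(V)*h)
    n += dt * (a_n(V)*(1 - n) - b_n(V)*n)
```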

Abstract:

The usual way of modeling variability, using threshold voltage shift and drain current amplification, is becoming inaccurate as new sources of variability appear in sub-22nm devices. In this work we apply the four-injector approach to variability modeling to the simulation of SRAMs with predictive technology models from the 20nm down to the 7nm node. We show that the SRAMs, designed following the ITRS roadmap, present stability metrics at least 20% higher than those obtained with a classical variability modeling approach. Speed estimation is also pessimistic, whereas leakage is underestimated, if sub-threshold slope and DIBL mismatch and their correlations with threshold voltage are not considered.
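
A sketch of the statistical ingredient highlighted here, namely sampling sub-threshold slope and DIBL mismatch together with the threshold voltage, including their correlations; the sigmas and correlation values are invented placeholders, not data from the paper:

```python
import numpy as np

# Correlated per-transistor variability: [dVth (V), dSS (mV/dec), dDIBL (mV/V)]
sigma = np.array([0.030, 4.0, 8.0])      # illustrative standard deviations
corr = np.array([[1.0, 0.4, 0.5],
                 [0.4, 1.0, 0.3],
                 [0.5, 0.3, 1.0]])       # illustrative correlation matrix
cov = np.outer(sigma, sigma) * corr

rng = np.random.default_rng(1)
s = rng.multivariate_normal(np.zeros(3), cov, size=100_000)

# Subthreshold leakage depends exponentially on Vth/SS, so ignoring SS and
# DIBL mismatch biases the leakage estimate, as the abstract points out:
SS0, Vds = 70.0, 0.7                     # nominal SS (mV/dec), drain bias (V)
dvth_mV = s[:, 0] * 1e3 - s[:, 2] * Vds  # DIBL effectively lowers Vth
leak_full = 10.0 ** (-dvth_mV / (SS0 + s[:, 1]))
leak_vth_only = 10.0 ** (-(s[:, 0] * 1e3) / SS0)
print(leak_full.mean(), leak_vth_only.mean())   # relative to nominal = 1
```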

Abstract:

Reliability is becoming the main concern for integrated circuits as technology scales below 22nm. Small imperfections in device manufacturing now result in important random differences between devices at the electrical level, which must be dealt with during design. New processes and materials, required to allow the fabrication of extremely short devices, are making new effects appear, resulting ultimately in increased static power consumption or higher vulnerability to radiation. SRAMs have become the most vulnerable part of electronic systems: not only do they account for more than half of the chip area of today's SoCs and microprocessors, but they are also critically affected by process variations, since the failure of a single cell makes the whole memory fail. This thesis addresses the different challenges that SRAM design faces in the smallest technologies. In a common scenario of increasing variability, issues like energy consumption, design aware of low-level technology effects, and radiation hardening are considered. First, given the increasing magnitude of device variability in the smallest nodes, as well as the new sources of variability appearing as a consequence of new devices and shortened lengths, accurate modeling of this variability is crucial. We propose to extend the injectors method, which models variability at circuit level by abstracting its physical sources, with two new injectors to better model the sub-threshold slope and drain-induced barrier lowering (DIBL), which are gaining importance in FinFET technology. The two new injectors bring increased accuracy to figures of merit at different abstraction levels of electronic design: transistor, gate and circuit levels. The mean square error when estimating performance and stability metrics of SRAM cells is reduced by a factor of at least 1.5 and up to 7.5, while the yield estimation is improved by orders of magnitude. Low power design is a major constraint given the fast-growing market of battery-powered mobile devices. It is also relevant because of the high power densities of today's systems, in order to reduce thermal dissipation and its impact on aging. The traditional approach of reducing the supply voltage to lower the energy consumption is challenging in the case of SRAMs, given the increased impact of process variations at low supply voltages. We propose a cell design that makes use of a negative bit-line write assist to overcome write failures as the main supply voltage is lowered. Despite using a second power source for the negative bit-line voltage, the proposed design achieves an energy reduction of up to 20% compared to a conventional cell. A new metric, the hold trip point, has been introduced to deal with new failure mechanisms in cells using a negative bit-line voltage, together with an alternative method to estimate cell read speed that requires fewer simulations. With the continuous reduction of device sizes, new mechanisms are being included to ease the fabrication process and to meet the performance targets of the successive nodes. As an example, we can consider the compressive or tensile strains introduced in FinFET technology, which alter the mobility of the transistors made out of the strained fins. The effects of these mechanisms are strongly layout-dependent: transistors are affected by their neighbors, and different types of transistors are affected in different ways. We propose to use complementary SRAM cells with pMOS pass-gates in order to shorten the fins of the nMOS devices and achieve long uncut fins for the pMOS devices, extending them to the neighboring cells and up to the limits of the cell array. Once shallow trench isolation (STI) and SiGe stressors are considered, the proposed design improves both kinds of transistors, boosting the performance of the complementary SRAM cell by more than 10% for the same failure probability and static power consumption, with no area overhead. Finally, while radiation has been a traditional concern in space electronics, the small currents and voltages used in the latest nodes are making them vulnerable to radiation-induced transient noise, even at ground level. Even if SOI or FinFET technologies reduce the amount of energy transferred from a striking particle to the circuit, the large process variations of the smallest nodes will affect their radiation hardening capabilities. We demonstrate that process variations can increase the radiation-induced error rate by up to 40% in the 7nm node compared to the nominal case. This increase is larger than the improvement achieved by radiation-hardened cell designs, suggesting that a reduction of process variations would bring a greater improvement.
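
A small numerical aside on why the yield estimation improves "by orders of magnitude": failure probability is a far-tail quantity, so even a modest error in the modeled variability shifts it dramatically. Toy Gaussian illustration with invented numbers:

```python
from scipy import stats

# A stability metric (say, a static noise margin in mV) with mean 150 and
# sigma 25 fails when it drops below 0. Misestimating sigma by only 15%
# changes the predicted failure probability by roughly two orders of magnitude.
p_accurate = stats.norm(150.0, 25.0).cdf(0.0)        # "true" sigma
p_biased = stats.norm(150.0, 25.0 * 1.15).cdf(0.0)   # sigma off by 15%
print(p_accurate, p_biased, p_biased / p_accurate)
```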

Abstract:

The exhaustion, absolute absence or simply the uncertainty about the amount of the reserves of fossil fuel sources, added to the variability of their prices and the increasing instability and difficulties of the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen, in a context that additionally comprehends strong public concerns about pollution and greenhouse gas emissions, is very high. Due to its excellent environmental impact, the public acceptance of the new energy carrier will depend on the control of the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel. This thesis investigates the numerical modeling of large scale explosions, focusing on the simulation of turbulent combustion in large domains where the achievable resolution is forcefully limited. In the introduction, a general description of explosion processes is undertaken. It is concluded that the restrictions on resolution make the modeling of the turbulence and combustion processes necessary. Subsequently, a critical review of the available methodologies for both turbulence and combustion is carried out, pointing out their strengths, deficiencies and suitability. As a conclusion of this investigation, it appears clear that the only viable methodology for combustion modeling, given the existing limitations, is the use of an expression describing the turbulent burning velocity as a function of different parameters to close a balance equation for the combustion progress variable; models of this kind are known as turbulent flame speed models. Likewise, depending on the resolution restrictions and the geometry of each particular problem, the use of different turbulence simulation methodologies, LES or RANS, is the most adequate solution for modeling the turbulence. Based on these findings, a combustion model is created in the framework of the turbulent flame speed methodology which is able to overcome the deficiencies of the available models for problems that must be computed at moderate or low resolution. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the well-known Zimont model. Under this approach, the emphasis of the analysis lies in the accurate determination of the burning velocity, both laminar and turbulent. On the one hand, the laminar burning velocity is determined through a newly developed correlation able to describe the simultaneous influence of the equivalence ratio, temperature, pressure and dilution with steam on the laminar burning velocity. The formulation obtained is valid over a wider domain of temperature, pressure and steam dilution than any of the previously available formulations. On the other hand, a certain number of turbulent burning velocity correlations are available in the literature. To select the most suitable one, they were compared against experiments and ranked, with the outcome that the formulation due to Schmidt is the most adequate for the conditions studied. Subsequently, the role of flame instabilities in the development of explosions is assessed. Their significance appears to be important for lean mixtures in which the turbulence intensity remains moderate. These conditions are relevant because they are typical of accidents in nuclear power plants. Therefore, a model was created to account for the effect of the instabilities, and concretely the acoustic-parametric instability, on the flame propagation velocity. This includes the mathematical derivation of the heuristic formulation of Bauwens et al. for the calculation of the burning velocity enhancement due to flame instabilities, as well as the analysis of the stability of flames with respect to a cyclic velocity perturbation. The results are combined to build a model of the acoustic-parametric instability. The following task in this research was to apply the developed model to several problems of significance for industrial safety, with the subsequent analysis of the results and comparison with the corresponding experimental data. As part of this task, simulations of explosions in a tunnel and in large containers, with and without concentration gradients and venting, were carried out. As a general outcome, the validation of the model is achieved, confirming its suitability for the problems addressed. As a final undertaking, a thorough study of the Fukushima-Daiichi catastrophe was carried out. The analysis aims at determining the amount of hydrogen participating in the explosion that happened in reactor unit one, in contrast with other studies centered on the amount of hydrogen generated during the accident. As an outcome of the research, it was determined that the most probable amount of hydrogen consumed during the explosion was 130 kg. It is remarkable that the combustion of such a small quantity of material can cause tremendous damage; this is an indication of the importance of these types of investigations. The industrial branches that can benefit from the applications of the model developed in this thesis include the whole future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with a special impact on the transport sector and on nuclear safety, both in fission and fusion technology.
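
For reference, the progress-variable balance that turbulent flame speed closures of this kind solve can be written as follows (standard TFC form with Favre averages; notation assumed here, not taken verbatim from the thesis): $\tilde{c}$ is the mean progress variable, $\rho_u$ the unburnt-gas density, $D_t$ the turbulent diffusivity and $S_T$ the modeled turbulent burning velocity.

```latex
\frac{\partial (\bar{\rho}\,\tilde{c})}{\partial t}
  + \nabla \cdot \left( \bar{\rho}\, \tilde{\mathbf{u}}\, \tilde{c} \right)
  = \nabla \cdot \left( \bar{\rho}\, D_t\, \nabla \tilde{c} \right)
  + \rho_u\, S_T\, \left| \nabla \tilde{c} \right|
```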

Abstract:

In recent years, Ge has regained attention as a candidate for integration into existing microelectronic technologies. Even though it is not thought to be a feasible full replacement for Si in the near future, it will likely serve as an excellent complement to enhance the electrical properties of future devices, especially due to its high carrier mobilities. This integration requires a significant upgrade of the state of the art of regular manufacturing processes. Simulation techniques, such as kinetic Monte Carlo (KMC) algorithms, provide an appealing environment for research and innovation in the field, especially in terms of time and funding costs. In the present study, KMC techniques are used, for the first time, to understand Ge front-end processing, specifically the damage accumulation and amorphization produced by ion implantation and the solid phase epitaxial regrowth (SPER) of the amorphized layers. First, binary collision approximation (BCA) simulations are used to calculate the damage caused by every ion. The evolution of this damage over time is simulated using non-lattice, or object, KMC (OKMC), in which only the defects are considered. SPER is simulated through a lattice KMC (LKMC) approach, able to follow the evolution of the lattice atoms forming the amorphous/crystalline interface. With the amorphization model developed in this work, implemented in a multi-material process simulator, all these processes can be simulated. It has been possible to understand damage accumulation, from point defect generation up to the formation of fully amorphous layers. This accumulation occurs in three differentiated regimes: it starts at a slow formation rate of damage regions, is followed by a fast local relaxation of certain areas into the amorphous phase, where the crystalline and amorphous phases coexist, and ends in the full amorphization of extended layers, where the accumulation rate saturates. The transition occurs when the damage concentration overcomes a certain threshold value, which is independent of the implantation conditions. When ions are implanted at relatively high temperatures, dynamic annealing takes place, healing the previously induced damage and establishing a competition between damage generation and its dissolution. These effects become especially important for light ions, such as B, for which the created damage is more diluted, smaller and distributed differently than that caused by implanting heavier ions, such as Ge. This description successfully reproduces the damage quantity and the extension of the amorphous layers caused by ion implantation reported in the literature. The recrystallization velocity of a previously amorphized sample strongly depends on the substrate orientation. The presented LKMC model has been able to explain these differences between orientations through a simple model, dominated by a single activation energy and different prefactors for the SPER rates depending on the neighboring configuration of the recrystallizing atoms. The formation of twin defects appears as a consequence of this description, and is predominant in substrates grown in the (111)Ge orientation. The model is able to reproduce the experimental results for different orientations, temperatures and times of evolution of the amorphous/crystalline interface reported by different authors. Preliminary parameterizations of the activation strain tensors also provide a good match between simulations and reported experimental results for SPER velocities at different temperatures under applied hydrostatic pressure. The studies presented in this thesis have helped to achieve a greater understanding of the damage generation, evolution, amorphization and SPER mechanisms in Ge, and also provide a useful tool to continue research in this field.
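
The engine behind both the object and lattice KMC descriptions is the standard residence-time (BKL/Gillespie) event loop sketched below. The event list, prefactors and activation energies are illustrative only, though the structure (a single activation energy with configuration-dependent prefactors) mirrors the SPER model described above:

```python
import numpy as np

kB, T = 8.617e-5, 800.0                        # Boltzmann constant (eV/K), K

events = {                                     # name: (prefactor 1/s, Ea eV)
    "recryst_111_neighbor": (1e13, 2.7),       # invented values for illustration
    "recryst_100_neighbor": (5e14, 2.7),       # same Ea, different prefactor
    "defect_migration":     (1e12, 0.6),
}
rates = {k: nu * np.exp(-Ea / (kB * T)) for k, (nu, Ea) in events.items()}

rng = np.random.default_rng(0)
t = 0.0
for _ in range(10):
    names, r = zip(*rates.items())
    R = np.cumsum(r)
    k = np.searchsorted(R, rng.random() * R[-1])   # pick an event with prob ∝ rate
    t += -np.log(rng.random()) / R[-1]             # exponential waiting time
    print(f"t = {t:.3e} s  ->  {names[k]}")
    # a real simulator would now update the lattice/defect state and the rates
```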

Abstract:

In recent decades, full electric and hybrid electric vehicles have emerged as an alternative to conventional cars due to a range of factors, including environmental and economic aspects. These vehicles are the result of considerable efforts to seek ways of reducing the use of fossil fuel for vehicle propulsion. Sophisticated technologies such as hybrid and electric powertrains require careful study and optimization. Mathematical models play a key role at this point. Currently, many advanced mathematical analysis tools, as well as computer applications, have been built for vehicle simulation purposes. Given the great interest in hybrid and electric powertrains, along with the increasing importance of reliable computer-based models, the author decided to integrate both aspects in the research purpose of this work. Furthermore, this is one of the first final degree projects carried out at the ETSII (Higher Technical School of Industrial Engineers) that covers the study of hybrid and electric propulsion systems. The present project is based on MBS3D 2.0, specialized software for the dynamic simulation of multibody systems developed at the UPM Institute of Automobile Research (INSIA). Automobiles are a clear example of the complex multibody systems that are present in nearly every field of engineering. The work presented here benefits from the availability of the MBS3D software. This program has proven to be a very efficient tool, with a highly developed underlying mathematical formulation. On this basis, the focus of this project is the extension of MBS3D features in order to be able to perform dynamic simulations of hybrid and electric vehicle models. This requires the joint simulation of the mechanical model of the vehicle together with the model of the hybrid or electric powertrain. These sub-models belong to completely different physical domains. In fact, the powertrain consists of energy storage systems, electrical machines and power electronics, connected to purely mechanical components (wheels, suspension, transmission, clutch…). The challenge today is to create a global vehicle model that is valid for computer simulation. Therefore, the main goal of this project is to apply co-simulation methodologies to a comprehensive model of an electric vehicle, where sub-models from different areas of engineering are coupled. The created electric vehicle (EV) model consists of a separately excited DC electric motor, a Li-ion battery pack, a DC/DC chopper converter and a multibody vehicle model. Co-simulation techniques allow car designers to simulate complex vehicle architectures and behaviors, which are usually difficult to implement in a real environment due to safety and/or economic reasons. In addition, multi-domain computational models help to detect the effects of different driving patterns and parameters and to improve the models in a fast and effective way. Automotive designers can greatly benefit from a multidisciplinary approach to new hybrid and electric vehicles. In this case, the global electric vehicle model includes an electrical subsystem and a mechanical subsystem. The electrical subsystem consists of three basic components: electric motor, battery pack and power converter. A modular representation is used for building the dynamic model of the vehicle drivetrain. This means that every component of the drivetrain (submodule) is modeled separately and has its own general dynamic model, with clearly defined inputs and outputs.
Then, all the particular submodules are assembled according to the drivetrain configuration and, in this way, the power flow across the components is completely determined. Dynamic models of electrical components are often based on equivalent circuits, where Kirchhoff's voltage and current laws are applied to derive the algebraic and differential equations. Here, a Randles circuit is used for the dynamic modeling of the battery, and the electric motor is modeled through the analysis of the equivalent circuit of a separately excited DC motor, in which the power converter is included. The mechanical subsystem is defined by the MBS3D equations. These equations consider the position, velocity and acceleration of all the bodies comprising the vehicle multibody system. MBS3D 2.0 is entirely written in MATLAB, and the structure of the program has been thoroughly studied and understood by the author. The MBS3D software is adapted according to the requirements of the applied co-simulation method. Some of the core functions, such as the integrator and graphics, are modified, and several auxiliary functions are added in order to compute the mathematical model of the electrical components. By coupling and co-simulating both subsystems, it is possible to evaluate the dynamic interaction among all the components of the drivetrain. The 'tight-coupling' method is used to co-simulate the sub-models. This approach integrates all subsystems simultaneously, and the results of the integration are exchanged by function call. This means that the integration is done jointly for the mechanical and the electrical subsystems, under a single integrator, and the speed of integration is therefore determined by the slower subsystem. Simulations are then used to show the performance of the developed EV model. However, this project focuses more on the validation of the computational and mathematical tool for electric and hybrid vehicle simulation. For this purpose, a detailed study and comparison of different integrators within the MATLAB environment is done. Consequently, the main efforts are directed towards the implementation of co-simulation techniques in the MBS3D software. In this regard, it is not intended to create an extremely precise EV model in terms of real vehicle performance, although an acceptable level of accuracy is achieved. The gap between the EV model and the real system is filled, in a way, by introducing the gas and brake pedal inputs, which reflect the actual driver behavior. This input is included directly in the differential equations of the model and determines the amount of current provided to the electric motor. For a separately excited DC motor, the rotor current is proportional to the traction torque delivered to the car wheels. Therefore, as occurs in real vehicle models, the propulsion torque in the mathematical model is controlled through acceleration and brake pedal commands. The designed transmission system also includes a reduction gear that adapts and transfers the torque coming from the motor drive. The main contribution of this project is, therefore, the implementation of a new calculation path for the wheel torques, based on the performance characteristics and outputs of the electric powertrain model. Originally, the wheel traction and braking torques were input to MBS3D through a vector directly computed by the user in a MATLAB script. Now, they are calculated as a function of the motor current which, in turn, depends on the current provided by the battery pack across the DC/DC chopper converter.
The motor and battery currents and voltages are the solutions of the electrical ODE (ordinary differential equation) system coupled to the multibody system. Simultaneously, the outputs of the MBS3D model are the position, velocity and acceleration of the vehicle at all times. The motor shaft speed is computed from the output vehicle speed considering the wheel radius, the gear reduction ratio and the transmission efficiency. This motor shaft speed, available from the MBS3D model, is then introduced into the differential equations corresponding to the electrical subsystem. In this way, MBS3D and the electrical powertrain model are interconnected, and both subsystems exchange values, as expected with the tight-coupling approach. When programming mathematical models of complex systems, code optimization is a key step in the process. A way to improve the overall performance of the integration, making use of C/C++ as an alternative programming language, is described and implemented. Although this entails a greater programming effort, it leads to important advantages regarding co-simulation speed and stability. In order to do this, it is necessary to integrate MATLAB with another integrated development environment (IDE), where C/C++ code can be generated and executed. In this project, the C/C++ files are programmed in Microsoft Visual Studio, and the interface between both IDEs is created by building C/C++ MEX file functions. These programs contain functions or subroutines that can be dynamically linked and executed from MATLAB. This process achieves reductions in simulation time of up to two orders of magnitude. The tests performed with different integrators also reveal the stiff character of the differential equations corresponding to the electrical subsystem, and allow the improvement of the co-simulation process. When varying the parameters of the integration and/or the initial conditions of the problem, the solutions of the system of equations show better dynamic response and stability, depending on the integrator used. Several integrators, with fixed and variable step size, and for stiff and non-stiff problems, are applied to the coupled ODE system. Then, the results are analyzed, compared and discussed. From all the above, the project can be divided into four main parts: 1. Creation of the equation-based electric vehicle model; 2. Programming, simulation and adjustment of the electric vehicle model; 3. Application of co-simulation methodologies to MBS3D and the electric powertrain subsystem; and 4. Code optimization and study of different integrators. Additionally, in order to place the project in context, the first chapters include an introduction to basic vehicle dynamics, the current classification of hybrid and electric vehicles and an explanation of the involved technologies, such as brake energy regeneration, electric and non-electric propulsion systems for EVs and HEVs (hybrid electric vehicles) and their control strategies. Later, the problem of the dynamic modeling of hybrid and electric vehicles is discussed. The integrated development environment and the simulation tool are also briefly described. The core chapters include an explanation of the major co-simulation methodologies and how they have been programmed and applied to the electric powertrain model together with the multibody system dynamic model. Finally, the last chapters summarize the main results and conclusions of the project and propose further research topics.
In conclusion, co-simulation methodologies are applicable within the integrated development environments MATLAB and Visual Studio, and the simulation tool MBS3D 2.0, where equation-based models of multidisciplinary subsystems, consisting of mechanical and electrical components, are coupled and integrated in a very efficient way.
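
As a minimal picture of the tight-coupling idea, the sketch below integrates a motor-current equation and a one-degree-of-freedom vehicle equation as a single ODE system under one stiff integrator, so the fast electrical dynamics set the step size for both, as described above. Every parameter is illustrative, and the real project couples the full MBS3D multibody model instead:

```python
import numpy as np
from scipy.integrate import solve_ivp

R, L, Kt = 0.05, 1e-3, 0.9     # armature resistance (Ohm), inductance (H), Nm/A
m, r, G = 1200.0, 0.3, 8.0     # vehicle mass (kg), wheel radius (m), gear ratio
Voc, R0 = 350.0, 0.1           # battery open-circuit voltage (V), internal R (Ohm)

def rhs(t, y):
    i, v = y                               # motor current (A), vehicle speed (m/s)
    duty = 0.6                             # chopper duty cycle ("gas pedal" input)
    omega = v / r * G                      # motor shaft speed (rad/s)
    v_bat = Voc - R0 * duty * i            # simple battery terminal voltage
    di = (duty * v_bat - R * i - Kt * omega) / L        # fast electrical dynamics
    F_res = 0.4 * v * abs(v) + 150.0 * np.tanh(v)       # drag + rolling resistance
    dv = (Kt * i * G / r - F_res) / m                   # slow mechanical dynamics
    return [di, dv]

# one stiff integrator for the coupled electrical + mechanical state
sol = solve_ivp(rhs, [0.0, 30.0], [0.0, 0.0], method="BDF", max_step=0.1)
print("speed after 30 s (m/s):", sol.y[1, -1])
```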

Abstract:

This computer simulation is based on a model of the origin of life proposed by H. Kuhn and J. Waser, in which the evolution of short molecular strands is assumed to take place in a distinctly spatiotemporally structured environment. In their model, the prebiotic situation is strongly simplified to grasp essential features of the evolution of the genetic apparatus, without attempting to trace the historic path. With a computer implementation confined to principal aspects and focused on critical features of the model, a deeper understanding of the model's premises is achieved. Each generation consists of three steps: (i) construction of the devices (entities exposed to selection) presently available; (ii) selection; and (iii) multiplication of the isolated strands (R oligomers) by complementary copying, with occasional variation by copying mismatch. In the beginning, the devices are single strands with random sequences; later, increasingly complex aggregates of strands form devices, such as a hairpin-assembler device, which develop in favorable cases. A monomers interlink by binding to the hairpin-assembler device, and a translation machinery, called the hairpin-assembler-enzyme device, emerges, which translates the sequence of R1 and R2 monomers in the assembler strand into the sequence of A1 and A2 monomers in the A oligomer, which works as an enzyme.
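
Of the three steps, the multiplication step lends itself to a compact sketch: complementary copying of R-oligomer strands with an occasional mismatch. The two-letter alphabet and the error rate below are invented for illustration:

```python
import random

# Two R monomer types, "1" and "2", pairing with their complements.
COMPLEMENT = {"1": "2", "2": "1"}

def copy_strand(strand, p_mismatch=0.01, rng=random):
    """Complementary copy of a strand; a mismatch inserts the wrong
    (non-complementary) monomer, providing the variation the model needs."""
    out = []
    for base in strand:
        c = COMPLEMENT[base]
        if rng.random() < p_mismatch:
            c = COMPLEMENT[c]          # copying error
        out.append(c)
    return "".join(out)

population = ["1122121", "2211212"]    # random initial strands
for generation in range(3):
    # multiplication only; construction and selection are omitted here
    population = [copy_strand(s) for s in population for _ in (0, 1)]
print(population)
```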

Abstract:

It has become clear that many organisms possess the ability to regulate their mutation rate in response to environmental conditions. So the question of finding an optimal mutation rate must be replaced by that of finding an optimal mutation schedule. We show that this task cannot be accomplished with standard population-dynamic models. We then develop a "hybrid" model for populations experiencing time-dependent mutation that treats population growth as deterministic but the time of first appearance of new variants as stochastic. We show that the hybrid model agrees well with a Monte Carlo simulation. From this model, we derive a deterministic approximation, a "threshold" model, that is similar to standard population dynamic models but differs in the initial rate of generation of new mutants. We use these techniques to model antibody affinity maturation by somatic hypermutation. We had previously shown that the optimal mutation schedule for the deterministic threshold model is phasic, with periods of mutation between intervals of mutation-free growth. To establish the validity of this schedule, we now show that the phasic schedule that optimizes the deterministic threshold model significantly improves upon the best constant-rate schedule for the hybrid and Monte Carlo models.
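
The hybrid idea can be sketched compactly: wild-type growth N(t) is deterministic, while the first appearance of a new variant is the first event of an inhomogeneous Poisson process with intensity mu(t)N(t). The following toy sampler (schedule and numbers invented) illustrates it:

```python
import numpy as np

rng = np.random.default_rng(0)
N0, g, dt, T = 100.0, 1.0, 1e-3, 10.0      # initial size, growth rate, step, horizon

def mu(t):
    """A phasic mutation schedule: mutation switched on in periodic pulses."""
    return 1e-4 if (t % 2.0) < 0.5 else 0.0

def first_appearance_time():
    """Inverse-transform sampling for an inhomogeneous Poisson process:
    the first mutant appears when the integrated intensity
    int_0^t mu(s) N(s) ds reaches an Exp(1) threshold."""
    target, acc, t = rng.exponential(), 0.0, 0.0
    while t < T:
        acc += mu(t) * N0 * np.exp(g * t) * dt
        if acc >= target:
            return t
        t += dt
    return None                             # no mutant arose in [0, T]

times = [first_appearance_time() for _ in range(200)]
print("P(no mutant in [0, T]):", times.count(None) / len(times))
```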

Abstract:

The cold climate anomaly about 8200 years ago is investigated with CLIMBER-2, a coupled atmosphere-ocean-biosphere model of intermediate complexity. This climate model simulates a cooling of about 3.6 K over the North Atlantic, induced by a meltwater pulse from Lake Agassiz routed through Hudson Strait. The meltwater pulse is assumed to have a volume of 1.6 x 10^14 m^3 and a discharge period of 2 years, on the basis of glaciological modeling of the decay of the Laurentide Ice Sheet (LIS). We present a possible mechanism which can explain the centennial duration of the 8.2 ka cold event. The mechanism is related to the existence of an additional equilibrium climate state with reduced North Atlantic Deep Water (NADW) formation and a southward shift of the NADW formation area. Hints at the additional climate state were obtained from the widely varying duration of the pulse-induced cold episode in response to overlaid random freshwater fluctuations in Monte Carlo simulations. The model equilibrium state was attained by releasing a weak multicentury freshwater flux through the St. Lawrence pathway, completed by the meltwater pulse. The existence of such a climate mode appears essential for reproducing climate anomalies in close agreement with paleoclimatic reconstructions of the 8.2 ka event. The results furthermore suggest that the temporal evolution of the cold event was partly a matter of chance.
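
For orientation, the quoted volume and discharge period imply the following freshwater flux (conversion mine, using 1 yr ≈ 3.15 × 10^7 s):

```latex
Q = \frac{V}{\Delta t}
  = \frac{1.6 \times 10^{14}\ \mathrm{m^3}}{2\ \mathrm{yr} \times 3.15 \times 10^{7}\ \mathrm{s\,yr^{-1}}}
  \approx 2.5 \times 10^{6}\ \mathrm{m^3\,s^{-1}} \approx 2.5\ \mathrm{Sv}
```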

Abstract:

Friction in hydrodynamic bearings is a major source of losses in car engines ([69]). The extreme loading conditions in those bearings lead to contact between the matching surfaces. In such conditions, not only the overall geometry of the bearing is relevant: the small-scale topography of the surface also determines the bearing performance. The possibility of shaping the surface of lubricated bearings down to the micrometer ([57]) opened the question of whether friction can be reduced by means of micro-textures, with mixed results. This work focuses on the development of efficient numerical methods to solve thin film (lubrication) problems down to the roughness scale of measured surfaces. Due to the high velocities and the convergent-divergent geometries of hydrodynamic bearings, cavitation takes place. To treat cavitation in the lubrication problem, the Elrod-Adams model is used, a mass-conserving model which has proven, in careful numerical ([12]) and experimental ([119]) tests, to be essential to obtain physically meaningful results. Another relevant aspect of the modeling is that the bearing inertial effects are considered, which is necessary to correctly simulate moving textures. As an application, the effects of micro-texturing the moving surface of the bearing were studied. Realistic values are assumed for the physical parameters defining the problems. Extensive fundamental studies were carried out in the hydrodynamic lubrication regime. Mesh-converged simulations considering the topography of real measured surfaces were also run, and the validity of the lubrication approximation was assessed for such rough surfaces.
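
As an unvalidated illustration of the mass-conserving cavitation treatment, the sketch below solves a 1D Reynolds equation with the classical Elrod-Adams switch between pressurized (theta = 1) and cavitated (p = 0, theta < 1) nodes; the geometry and all numbers are invented, and backflow into cavitated nodes is neglected:

```python
import numpy as np

n, Lx, U, visc = 200, 1e-2, 1.0, 0.01        # nodes, length (m), speed (m/s), Pa*s
x = np.linspace(0.0, Lx, n)
h = 2e-5 + 8e-5 * (x / Lx - 0.5) ** 2        # convergent-divergent gap (m)
dx = x[1] - x[0]

p = np.zeros(n)                              # gauge pressure; cavitation at p = 0
theta = np.ones(n)                           # film saturation (mass fraction)
k = h ** 3 / (12.0 * visc)                   # Poiseuille coefficient

for sweep in range(5000):                    # Gauss-Seidel with EA switch
    for i in range(1, n - 1):
        kw, ke = 0.5 * (k[i] + k[i - 1]), 0.5 * (k[i] + k[i + 1])
        # trial: full film at node i (theta = 1), upwind Couette flux
        rhs = 0.5 * U * (h[i] - theta[i - 1] * h[i - 1]) / dx
        p_new = (kw * p[i - 1] + ke * p[i + 1] - dx * dx * rhs) / (kw + ke)
        if p_new > 0.0:                      # pressurized node
            p[i], theta[i] = p_new, 1.0
        else:                                # cavitated node: p = 0, conserve mass
            p[i] = 0.0
            theta[i] = min(1.0, (theta[i - 1] * h[i - 1]
                                 + 2.0 * kw * p[i - 1] / (U * dx)) / h[i])

print("max pressure (Pa):", p.max(), " cavitated nodes:", int((theta < 1).sum()))
```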


Abstract:

Virtual Worlds Generator is a grammatical model proposed to define virtual worlds. It integrates the diversity of sensors and interaction devices, multimodality and a virtual simulation system. Its grammar allows the definition and abstraction, as symbol strings, of the scenes of the virtual world, independently of the hardware that is used to represent the world or to interact with it. A case study is presented to explain how to use the proposed model to formalize a robot navigation system with multimodal perception and a hybrid control scheme of the robot. The result is an instance of the model grammar that implements the robotic system and is independent of the sensing devices used for perception and interaction. In conclusion, the Virtual Worlds Generator adds value to the simulation of virtual worlds, since the definition can be done formally and independently of the peculiarities of the supporting devices.
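
A toy sketch of what a device-independent string grammar of this kind might look like; the symbols and rules are invented for illustration and are not taken from the paper:

```python
# Scenes as symbol strings: non-terminals expand through rewrite rules that
# never mention a concrete sensor or display, so the derived string stays
# independent of the supporting hardware.
RULES = {
    "WORLD": ["SCENE"],
    "SCENE": ["robot", "ACTIVITY"],
    "ACTIVITY": ["perceive", "decide", "act"],   # hybrid control loop
}

def derive(symbol):
    """Expand a non-terminal into a string of terminal symbols."""
    if symbol not in RULES:
        return [symbol]                          # terminal: leave as-is
    out = []
    for s in RULES[symbol]:
        out.extend(derive(s))
    return out

print(" ".join(derive("WORLD")))   # -> robot perceive decide act
```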