962 results for Two variable oregonator model
Abstract:
In this article, we perform an extensive study of flavor observables in a two-Higgs-doublet model with generic Yukawa structure (of type III). This model is interesting not only because it is the decoupling limit of the minimal supersymmetric standard model but also because of its rich flavor phenomenology, which allows for sizable effects not only in flavor-changing neutral-current (FCNC) processes but also in tauonic B decays. We examine the possible effects in flavor physics and constrain the model both from tree-level processes and from loop observables. The free parameters of the model are the heavy Higgs mass, tanβ (the ratio of vacuum expectation values) and the “nonholomorphic” Yukawa couplings ϵfij (f = u, d, ℓ). In our analysis we constrain the elements ϵfij in various ways: In a first step we give order-of-magnitude constraints on ϵfij from ’t Hooft’s naturalness criterion, finding that all ϵfij must be rather small unless the third generation is involved. In a second step, we constrain the Yukawa structure of the type-III two-Higgs-doublet model from tree-level FCNC processes (Bs,d→μ+μ−, KL→μ+μ−, D̄0→μ+μ−, ΔF=2 processes, τ−→μ−μ+μ−, τ−→e−μ+μ− and μ−→e−e+e−) and observe that all flavor off-diagonal elements of these couplings, except ϵu32,31 and ϵu23,13, must be very small in order to satisfy the current experimental bounds. In a third step, we consider Higgs-mediated loop contributions to FCNC processes [b→s(d)γ, Bs,d mixing, K–K̄ mixing and μ→eγ], finding that ϵu13 and ϵu23 must also be very small, while the bounds on ϵu31 and ϵu32 are especially weak. Furthermore, considering the constraints from electric dipole moments we obtain constraints on some of the parameters ϵu,ℓij. Taking into account the constraints from FCNC processes, we study the size of possible effects in the tauonic B decays (B→τν, B→Dτν and B→D∗τν) as well as in D(s)→τν, D(s)→μν, K(π)→eν, K(π)→μν and τ→K(π)ν, which are all sensitive to tree-level charged-Higgs exchange. Interestingly, the unconstrained ϵu32,31 are just the elements which directly enter the branching ratios for B→τν, B→Dτν and B→D∗τν. We show that they can explain the deviations from the SM predictions in these processes without fine-tuning. Furthermore, B→τν, B→Dτν and B→D∗τν can even be explained simultaneously. Finally, we give upper limits on the branching ratios of the lepton-flavor-violating neutral B meson decays (Bs,d→μe, Bs,d→τe and Bs,d→τμ) and correlate the radiative lepton decays (τ→μγ, τ→eγ and μ→eγ) to the corresponding neutral-current lepton decays (τ−→μ−μ+μ−, τ−→e−μ+μ− and μ−→e−e+e−). A detailed Appendix contains all relevant information for the considered processes for general scalar-fermion-fermion couplings.
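For orientation, the size of such tree-level charged-Higgs effects is often illustrated in the type-II limit (all ϵfij = 0), where the ratio of the B→τν branching fraction to its SM value collapses to a simple function of tanβ and the charged-Higgs mass; this is only an illustrative limiting case, not the full type-III expression with ϵu32,31 used in the article:

\[ \frac{\mathcal{B}(B\to\tau\nu)}{\mathcal{B}(B\to\tau\nu)_{\mathrm{SM}}} \simeq \left(1-\tan^{2}\beta\,\frac{m_{B}^{2}}{m_{H^{\pm}}^{2}}\right)^{2}. \]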
Abstract:
Localized short-echo-time 1H-MR spectra of human brain contain contributions from many low-molecular-weight metabolites and baseline contributions from macromolecules. Two approaches to modeling such spectra are compared, and the data acquisition sequence, optimized for reproducibility, is presented. Modeling relies on prior-knowledge constraints and linear combination of metabolite spectra. We investigated what can be gained by basis parameterization, i.e., describing the basis spectra as sums of parametric lineshapes. Effects of basis composition and of adding experimentally measured macromolecular baselines were also investigated. Both fitting methods yielded quantitatively similar values, model deviations, error estimates, and reproducibility in the evaluation of 64 spectra of human gray and white matter from 40 subjects. Major advantages of parameterized basis functions are the possibilities to evaluate fitting parameters separately, to treat subgroup spectra as independent moieties, and to incorporate deviations from straightforward metabolite models. It was found that most of the 22 basis metabolites used may provide meaningful data when comparing patient cohorts. In individual spectra, sums of closely related metabolites are often more meaningful. Inclusion of a macromolecular basis component leads to relatively small, but significantly different, tissue content estimates for most metabolites. It provides a means to quantitate baseline contributions that may contain crucial clinical information.
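As a rough illustration of the linear-combination modeling compared in the study, the sketch below fits non-negative weights for a set of metabolite basis spectra plus a measured macromolecular baseline to a synthetic spectrum; the dimensions, basis and data are placeholders, not the paper's acquisition or fitting pipeline.

```python
# Hedged sketch of linear-combination model (LCM) fitting of a short-TE 1H-MR
# spectrum: the measured spectrum is approximated as a non-negative mix of
# metabolite basis spectra plus a macromolecular baseline component.
import numpy as np
from scipy.optimize import nnls

n_points, n_metab = 2048, 22                 # spectral points, basis metabolites
rng = np.random.default_rng(0)

basis = rng.random((n_points, n_metab))      # columns: metabolite basis spectra
macromol = rng.random((n_points, 1))         # measured macromolecular baseline
design = np.hstack([basis, macromol])        # full linear model

true_conc = rng.random(n_metab + 1)
spectrum = design @ true_conc + 0.01 * rng.standard_normal(n_points)

conc, resid = nnls(design, spectrum)         # non-negative concentration estimates
print(conc[:5], resid)
```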
Abstract:
Though E2F1 is deregulated in most human cancers by mutations of the p16-cyclin D-Rb pathway, it also exhibits tumor suppressive activity. A transgenic mouse model overexpressing E2F1 under the control of the bovine keratin 5 (K5) promoter exhibits epidermal hyperplasia and spontaneously develops tumors in the skin and other epithelial tissues after one year of age. In a p53-deficient background, aberrant apoptosis in K5 E2F1 transgenic epidermis is reduced and tumorigenesis is accelerated. In sharp contrast, K5 E2F1 transgenic mice are resistant to papilloma formation in the DMBA/TPA two-stage carcinogenesis protocol. K5 E2F4 and K5 DP1 transgenic mice were also characterized; both display epidermal hyperplasia but do not develop spontaneous tumors, even in cooperation with p53 deficiency. These transgenic mice do not have increased levels of apoptosis in their skin and are more susceptible to papilloma formation in the two-stage carcinogenesis model. These studies show that deregulated proliferation does not necessarily lead to tumor formation and that the ability to suppress skin carcinogenesis is unique to E2F1. E2F1 can also suppress skin carcinogenesis when okadaic acid is used as the tumor promoter and when a pre-initiated mouse model is used, demonstrating that E2F1's tumor suppressive activity is not specific for TPA and occurs at the promotion stage. E2F1 was thought to induce p53-dependent apoptosis through upregulation of the p19ARF tumor suppressor, which inhibits mdm2-mediated p53 degradation. Consistent with in vitro studies, the overexpression of E2F1 in mouse skin results in the transcriptional activation of p19ARF and the accumulation of p53. Inactivation of either p19ARF or p53 restores the sensitivity of K5 E2F1 transgenic mice to DMBA/TPA carcinogenesis, demonstrating that an intact p19ARF-p53 pathway is necessary for E2F1 to suppress carcinogenesis. Surprisingly, while p53 is required for E2F1 to induce apoptosis in mouse skin, p19ARF is not, and inactivation of p19ARF actually enhances E2F1-induced apoptosis and proliferation in transgenic epidermis. This indicates that ARF is important for E2F1-induced tumor suppression but not apoptosis. Senescence is another potential mechanism of tumor suppression that involves p53 and p19ARF. K5 E2F1 transgenic mice initiated with DMBA and treated with TPA show an increased number of senescent cells in their epidermis. These experiments demonstrate that E2F1's unique tumor suppressive activity in two-stage skin carcinogenesis can be genetically separated from E2F1-induced apoptosis and suggest that senescence utilizing the p19ARF-p53 pathway plays a role in tumor suppression by E2F1.
Abstract:
The Jinshajiang suture zone, located in the eastern part of the Tethyan tectonic domain, is notable for a large-scale distribution of Late Jurassic to Triassic granitoids. These granitoids were genetically related to the evolution of the Paleo-Tethys Ocean. The Beiwu, Linong and Lunong granitoids occur in the middle zone of the Jinshajiang suture zone and possess similar geochemical features, indicating that they share a common magma source. SIMS zircon U-Pb dating reveals that the Beiwu, Linong and Lunong granitic intrusions were emplaced at 233.9±1.4 Ma (2σ), 233.1±1.4 Ma (2σ) and 231.0±1.6 Ma (2σ), respectively. All of these granitoids are enriched in Si (SiO2 = 65.2-73.5 wt.%) and large ion lithophile elements (LILEs), but depleted in high field strength elements (HFSEs, e.g., Nb, Ta, Ti). In addition, they have low P2O5 contents (0.06-0.11 wt.%), A/CNK values [molar Al2O3/(CaO+Na2O+K2O), mostly <1.1] and 10000Ga/Al ratios (1.7-2.2), consistent with the characteristics of I-type granites. In terms of isotopic compositions, these granitoids have high initial 87Sr/86Sr ratios (0.7078-0.7148), Pb isotopic compositions [(206Pb/204Pb)t=18.213-18.598, (207Pb/204Pb)t=15.637-15.730 and (208Pb/204Pb)t=38.323-38.791], zircon δ18O values (7 to 9.3 per mil) and negative εNd(t) values (-5.1 to -6.7), suggesting they were predominantly derived from the continental crust. Their Nb/Ta ratios (average value = 8.6) are consistent with those of the lower continental crust (LCC). However, variable εHf(t) values (-8.6 to +2.8) and the occurrence of mafic microgranular enclaves (MMEs) suggest that mantle-derived melts and lower crustal magmas were involved in the generation of these granitoids. Moreover, the high Pb isotopic ratios and elevated zircon δ18O values of these rocks indicate a significant contribution of upper crustal material. We propose a model in which the Beiwu, Linong and Lunong granitoids were generated in a late-collisional or post-collisional setting. It is possible that this collision was completed before the Late Triassic. Decompression-induced mantle-derived magmas underplated the crust and provided the heat for its anatexis. Hybrid melts of mantle-derived and lower-crustal magmas were then generated. These hybrid melts thereafter ascended to shallow depths, where they assimilated some sedimentary rocks. Such a three-component mixed magma source, together with subsequent fractional crystallization, could be responsible for the formation of the Beiwu, Linong and Lunong granitoids.
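The aluminium-saturation and Ga/Al indices quoted above follow directly from the oxide analyses; a minimal sketch of the arithmetic (with illustrative oxide values, not the paper's data) is:

```python
# Hedged sketch of the whole-rock indices quoted in the abstract: A/CNK
# (molar Al2O3/(CaO+Na2O+K2O)) and 10000*Ga/Al. Input values are illustrative.
def a_cnk(al2o3, cao, na2o, k2o):
    # divide wt.% by molar mass (g/mol) to obtain molar proportions
    return (al2o3 / 101.96) / (cao / 56.08 + na2o / 61.98 + k2o / 94.20)

def ga_al_10000(ga_ppm, al2o3_wtpct):
    al_ppm = al2o3_wtpct * 1e4 * (2 * 26.98 / 101.96)   # Al2O3 wt.% -> Al ppm
    return 1e4 * ga_ppm / al_ppm

print(a_cnk(14.5, 2.1, 3.4, 4.0), ga_al_10000(18.0, 14.5))
```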
Abstract:
The total organic carbon to total nitrogen ratio (C/N) and the corresponding isotopic compositions (δ13CTOC vs. δ15NTN) are oft-applied proxies to discern terrigenous from marine-sourced organic matter and to unravel ancient environmental information. In the high-deposition Asian marginal seas, however, dilution by matrix components, including N-bearing minerals, leads to elusive and even contradictory interpretations. We use KOH-KOBr to separate operationally defined total organic matter into oxidizable (labile) and residual fractions for content and isotope measurements. In a sediment core from the Okinawa Trough, significant amounts of carbon and nitrogen existed in the residual phase, in which the C/N ratio was ~9, resembling most documented sedimentary bulk C/N ratios in the China marginal seas. Such similarity creates a pseudo-C/N that undermines the application of bulk C/N. The residual carbon, though of unknown composition, displayed a δ13C range (-22.7 to -18.9 per mil, mean -20.7 per mil) similar to that of black carbon (-24.0 to -22.8 per mil) in East China Sea surface sediments. After removing the residual fraction, we found that the temporal pattern of δ13CLOC in the labile fraction (LOC) was more variable but broadly agreed with the atmospheric pCO2-induced changes in the marine endmember δ13C. Thus, we suggest adding a pCO2-induced endmember modulation to the two-endmember mixing model for paleo-environmental reconstruction. Meanwhile, the residual nitrogen revealed an intimate association with illite content, suggesting a terrestrial origin. Additionally, δ15N in the residual fraction likely carried a climate imprint from land. Further studies are required to explore the controlling factors for carbon and nitrogen isotopic speciation and to retrieve the information locked in the residual fraction.
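In the suggested scheme, the labile-fraction δ13C is interpreted with a standard two-endmember mixing relation in which the marine endmember is allowed to drift with atmospheric pCO2; written generically for reference (not the paper's exact parameterization):

\[ \delta^{13}\mathrm{C}_{\mathrm{LOC}} = f_{\mathrm{terr}}\,\delta^{13}\mathrm{C}_{\mathrm{terr}} + \left(1-f_{\mathrm{terr}}\right)\delta^{13}\mathrm{C}_{\mathrm{mar}}(p\mathrm{CO_2}), \qquad f_{\mathrm{terr}} = \frac{\delta^{13}\mathrm{C}_{\mathrm{LOC}}-\delta^{13}\mathrm{C}_{\mathrm{mar}}(p\mathrm{CO_2})}{\delta^{13}\mathrm{C}_{\mathrm{terr}}-\delta^{13}\mathrm{C}_{\mathrm{mar}}(p\mathrm{CO_2})}. \]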
Abstract:
This paper describes a two-part methodology for managing the risk posed by water supply variability to irrigated agriculture. First, an econometric model is used to explain the variation in the production value of irrigated agriculture. The explanatory variables include an index of irrigation water availability (surface storage levels), a price index representative of the crops grown in each geographical unit, and a time variable. The model corrects for autocorrelation and is applied to 16 Spanish provinces that are representative in terms of irrigated agriculture. In the second part, the fitted models are used for the economic evaluation of drought risk. Inflow variability in the hydrological system servicing each province is used to perform ex-ante evaluations of economic output for the upcoming irrigation season. The model's error and the probability distribution functions (PDFs) of the reservoirs' storage variations are used to generate Monte Carlo (Latin Hypercube) simulations of agricultural output 7 and 3 months prior to the irrigation season. The results of these simulations illustrate the different risk profiles of each management unit, which depend on farm productivity and on the probability distribution function of water inflow to reservoirs. The potential for ex-ante drought impact assessments is demonstrated. By complementing hydrological models, this method can assist water managers and decision-makers in managing reservoirs.
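A minimal sketch of the ex-ante simulation step described above, assuming a toy fitted model and illustrative distribution parameters (none of these numbers come from the paper):

```python
# Hedged sketch of the ex-ante drought-risk simulation: draw reservoir storage
# variations via Latin Hypercube sampling, pass them through a fitted (here, toy)
# econometric model, and add the model error term.
import numpy as np
from scipy.stats import qmc, norm

n_sim = 10_000
sampler = qmc.LatinHypercube(d=2, seed=1)
u = sampler.random(n_sim)                            # uniform [0,1) LHS draws

storage = norm(loc=0.6, scale=0.2).ppf(u[:, 0])      # water availability index
model_err = norm(loc=0.0, scale=0.05).ppf(u[:, 1])   # econometric model error

beta0, beta1 = 1.2, 0.8                              # stand-ins for fitted coefficients
output = beta0 + beta1 * storage + model_err         # simulated production value index

print(np.percentile(output, [5, 50, 95]))            # ex-ante risk profile
```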
Abstract:
This paper presents a theoretical framework intended to accommodate circuit devices described by characteristics involving more than two fundamental variables. This framework is motivated by the recent appearance of a variety of so-called mem-devices in circuit theory, and makes it possible to model the coexistence of memory effects of different nature in a single device. With a compact formalism, this setting accounts for classical devices and also for circuit elements which do not admit a two-variable description. Fully nonlinear characteristics are allowed for all devices, driving the analysis beyond the framework of Chua and Di Ventra. We classify these fully nonlinear circuit elements in terms of the variables involved in their constitutive relations and the notions of the differential order and the state order of a device. We extend the notion of a topologically degenerate configuration to this broader context, and characterize the differential-algebraic index of nodal models of such circuits. Additionally, we explore certain dynamical features of mem-circuits involving manifolds of non-isolated equilibria. Related bifurcation phenomena are explored for a family of nonlinear oscillators based on mem-devices.
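Purely as an illustration of the kind of constitutive relation meant here (not a device taken from the paper), a characteristic that ties together more than two fundamental variables can be written as a fully nonlinear relation among voltage, current, charge and flux, with the latter two defined dynamically:

\[ f(v,\,i,\,q,\,\varphi)=0, \qquad \dot q = i, \qquad \dot\varphi = v. \]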
Abstract:
Road traffic accidents are a highly relevant social phenomenon and one of the main causes of mortality in developed countries. To understand this complex phenomenon, sophisticated econometric models are applied both in the academic literature and by public administrations. This thesis is devoted to the analysis of macroscopic models for road accidents in Spain. Its objective can be divided into two blocks: a. To gain a better understanding of the road accident phenomenon through the application and comparison of two macroscopic models frequently used in this area, DRAG and UCM, applied to van-involved accidents in Spain during the period 2000-2009. The analyses were carried out within a frequentist framework using the TRIO, SAS and TRAMO/SEATS programs. b. Model application and the selection of the most relevant variables are current research topics, and this thesis develops and applies a methodology that aims to improve, through theoretical and practical tools, the understanding of selection and comparison of macroscopic models. Methodologies were developed for both model selection and model comparison. The model selection methodology was applied to fatal accidents on the road network in the period 2000-2011, and the proposed methodology for comparing macroscopic models was applied to the frequency and severity of van-involved accidents in the period 2000-2009. The following contributions arise from these developments: a. Deeper insight into the models through the interpretation of the response variables and the predictive power of the models; knowledge of the behavior of van-involved accidents was broadened in this process. b1. Development of a methodology for selecting the variables relevant to explaining the occurrence of road accidents. Building on the results of a), the proposed methodology is based on DRAG-type models whose parameters were estimated within a Bayesian framework and applied to fatal accident data for the years 2000-2011 in Spain. This novel methodology was compared with dynamic regression (DR) models, the most common models for working with stochastic processes. The results are comparable, and the new proposal constitutes a methodological contribution that streamlines the model selection process at low computational cost. b2. The thesis also designs a methodology for the theoretical comparison of competing models through the joint application of Monte Carlo simulation, design of experiments and analysis of variance (ANOVA). The competing models have different structures, which affect the estimation of the effects of the explanatory variables. Building on the study developed in b1), this work aims to determine how the stochastic trend component that a UCM models explicitly is captured by a DRAG model, which has no specific mechanism for modeling this element. The results of this study are important for deciding whether the series needs to be differenced before modeling. b3. New algorithms were developed to carry out the methodological exercises, implemented in different programs such as R, WinBUGS and MATLAB.
The fulfillment of the thesis objectives through the developments outlined above is summarized in the following conclusions: 1. The road accident phenomenon has been analyzed with two macroscopic models. The effects of the influential factors differ depending on the methodology applied, while the prediction results are similar, with a slight superiority of the DRAG methodology. 2. The variable and model selection methodology provides practical results regarding the explanation of road accidents; prediction and interpretation have also been improved by this new methodology. 3. A methodology has been implemented to deepen the understanding of the relationship between the effect estimates of two competing models such as DRAG and UCM. A very important aspect here is the interpretation of the trend by two different models, from which very useful information has been obtained for researchers in the modeling field. The results provide a satisfactory extension of knowledge of the modeling process and of the understanding of van-involved and total fatal accidents in Spain. ABSTRACT Road accidents are a very relevant social phenomenon and one of the main causes of death in industrialized countries. Sophisticated econometric models are applied in academic work and by the administrations for a better understanding of this very complex phenomenon. This thesis is thus devoted to the analysis of macro models for road accidents with application to the Spanish case. The objectives of the thesis may be divided into two blocks: a. To achieve a better understanding of the road accident phenomenon by means of the application and comparison of two of the most frequently used macro modeling approaches: DRAG (demand for road use, accidents and their gravity) and UCM (unobserved components model); the application was made to van-involved accident data in Spain in the period 2000-2009. The analysis has been carried out within the frequentist framework and using available state-of-the-art software, TRIO, SAS and TRAMO/SEATS. b. Concern about the application of the models and about the relevant input variables to be included in the model has driven the research to try to improve, by theoretical and practical means, the understanding of methodological choice and model selection procedures. The theoretical developments have been applied to fatal accidents during the period 2000-2011 and van-involved road accidents in 2000-2009. This has resulted in the following contributions: a. Insight into the models has been gained through interpretation of the effect of the input variables on the response and the prediction accuracy of both models. The behavior of van-involved road accidents has been explained during this process. b1. Development of an input variable selection procedure, which is crucial for an efficient choice of the inputs. Following the results of a), the procedure uses the DRAG-like model. The estimation is carried out within the Bayesian framework. The procedure has been applied to the total road accident data in Spain in the period 2000-2011. The results of the model selection procedure are compared and validated through a dynamic regression model, given that the original data have a stochastic trend. b2. A methodology for theoretical comparison between the two models through Monte Carlo simulation, computer experiment design and ANOVA.
The models have different structures, and this affects the estimation of the effects of the input variables. The comparison is thus carried out in terms of the effect of the input variables on the response, which is in general different and should be related. Considering the results of the study carried out in b1), this study tries to find out how a stochastic time trend will be captured in the DRAG model, since there is no specific trend component in DRAG. Given the results of b1), the findings of this study are crucial in order to see whether the estimation of data with a stochastic component through DRAG will be valid or whether the data need a certain adjustment (typically differencing) prior to the estimation. The model comparison methodology was applied to the UCM and DRAG models, considering that, as mentioned above, the UCM has a specific trend term while DRAG does not. b3. New algorithms were developed for carrying out the methodological exercises. For this purpose different software packages, R, WinBUGS and MATLAB, were used. These objectives and contributions have resulted in the following findings: 1. The road accident phenomenon has been analyzed by means of two macro models: the effects of the influential input variables may be estimated through the models, but it has been observed that the estimates vary from one model to the other, although prediction accuracy is similar, with a slight superiority of the DRAG methodology. 2. The variable selection methodology provides very practical results as far as the explanation of road accidents is concerned. Prediction accuracy and interpretability have been improved by means of a more efficient input variable and model selection procedure. 3. Insight has been gained into the relationship between the estimates of the effects using the two models. A very relevant issue here is the role of trend in both models; relevant recommendations for the analyst have resulted from this. The results have provided a very satisfactory insight into both modeling aspects and the understanding of both van-involved and total fatal accident behavior in Spain.
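As a hedged sketch of the two modeling routes being compared (not the thesis's actual DRAG or UCM specifications), the snippet below fits an unobserved-components model with an explicit stochastic trend and, alternatively, a static regression on differenced data; the series, regressor and coefficients are synthetic.

```python
# Hedged sketch: UCM with a stochastic (local linear) trend vs. a regression on
# differenced data, the contrast discussed in the comparison methodology above.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 144
exposure = rng.normal(size=n).cumsum()                  # proxy for an input variable
trend = np.cumsum(0.02 + 0.1 * rng.normal(size=n))      # stochastic trend
accidents = 5 + 0.3 * exposure + trend + rng.normal(scale=0.5, size=n)

ucm = sm.tsa.UnobservedComponents(accidents, level="local linear trend",
                                  exog=exposure).fit(disp=False)
print(ucm.params)

# alternative: difference the series first, then fit a static regression
ols = sm.OLS(np.diff(accidents), sm.add_constant(np.diff(exposure))).fit()
print(ols.params)
```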
Abstract:
The depletion, absence or simply the uncertainty about the amount of fossil fuel reserves, added to price variability and to the growing instability of the supply chain, create strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen as an energy carrier is very high in a context that also includes strong public concern about pollution and greenhouse gas emissions. Given its excellent environmental impact, public acceptance of the new energy carrier would depend, a priori, on controlling the risks associated with its handling and storage. Among these, the undeniable risk of explosion appears as the main drawback of this alternative fuel. This thesis investigates the numerical modeling of explosions in large volumes, focusing on the simulation of turbulent combustion in large computational domains where the achievable resolution is strongly limited. The introduction gives a general description of explosion processes and concludes that the resolution restrictions of the calculations make it necessary to model both turbulence and combustion. A critical review of the methodologies available for turbulence and for combustion follows, pointing out the strengths, deficiencies and suitability of each. The conclusion of this review is that the only viable strategy for combustion modeling, given the existing limitations, is the use of an expression describing the turbulent burning velocity as a function of various parameters. Models of this kind are called turbulent flame speed models and close a balance equation for the combustion progress variable. It is also concluded that the most suitable solution for simulating the turbulence is to use different methodologies, LES or RANS, depending on the geometry and the resolution constraints of each particular problem. Based on these findings, a combustion model is created within the framework of turbulent flame speed models. The proposed methodology is able to overcome the deficiencies of the available models for problems that must be computed at moderate or low resolution. In particular, the model uses a heuristic algorithm to prevent the growth of the flame-brush thickness, a deficiency that hampered the well-known Zimont model. Under this approach, the emphasis of the analysis is on the determination of the burning velocity, both laminar and turbulent. The laminar burning velocity is determined through a new formulation able to account simultaneously for the influence of the equivalence ratio, temperature, pressure and dilution with steam on the laminar burning velocity. The formulation obtained is valid over a wider range of temperature, pressure and steam dilution than any of the previously available formulations.
The turbulent burning velocity, in turn, can be computed using correlations that express this quantity as a function of various parameters. To select the most suitable formulation, the results obtained with several expressions were compared with experimental results, and the equation due to Schmidt was found to be the most adequate for the conditions of the study. Next, the importance of flame instabilities in the propagation of combustion fronts is analyzed. Their relevance is significant for fuel-lean mixtures in which the turbulence intensity remains moderate, conditions that are important because they are common in accidents at nuclear power plants. A model is therefore developed to estimate the effect of the instabilities, and in particular of the acoustic-parametric instability, on the flame propagation speed. The modeling includes the mathematical derivation of the heuristic formulation of Bauwens et al. for the increase in burning velocity due to flame instabilities, as well as the analysis of the stability of flames with respect to a cyclic perturbation. Finally, the results are combined to complete the modeling of the acoustic-parametric instability. After this phase, the research focused on the application of the model developed to several problems of importance for industrial safety, followed by the analysis of the results and their comparison with the corresponding experimental data. Specifically, explosions in tunnels and in containers were simulated, with and without concentration gradients and venting. As a general result, the model is validated, confirming its suitability for these problems. As a final task, an in-depth analysis of the Fukushima-Daiichi catastrophe has been carried out. The aim of the analysis is to determine the amount of hydrogen that exploded in reactor unit one, in contrast with other studies on the subject, which have focused on determining the amount of hydrogen generated during the accident. As a result of the investigation, it was determined that the most probable amount of hydrogen consumed during the explosion was 130 kg. It is remarkable that the combustion of a relatively small amount of hydrogen can cause such significant damage; this illustrates the importance of this type of research. The branches of industry for which the developed model will be of interest span the entire future hydrogen economy (fuel cells, vehicles, energy storage, etc.), with a special impact on the transport and nuclear energy sectors, for both fission and fusion technologies. ABSTRACT The exhaustion, absolute absence or simply the uncertainty about the amount of fossil fuel reserves, added to the variability of their prices and the increasing instability and difficulties in the supply chain, are strong incentives for the development of alternative energy sources and carriers. The attractiveness of hydrogen in a context that additionally comprehends concerns about pollution and emissions is very high.
Due to its excellent environmental impact, public acceptance of the new energy carrier will depend on the risks associated with its handling and storage. Among these, the danger of a severe explosion appears as the major drawback of this alternative fuel. This thesis investigates the numerical modeling of large-scale explosions, focusing on the simulation of turbulent combustion in large domains where the achievable resolution is forcefully limited. In the introduction, a general description of the explosion process is undertaken. It is concluded that the restrictions on resolution make it necessary to model the turbulence and combustion processes. Subsequently, a critical review of the available methodologies for both turbulence and combustion is carried out, pointing out their strengths and deficiencies. As a conclusion of this investigation, it appears clear that the only viable methodology for combustion modeling is the use of an expression for the turbulent burning velocity to close a balance equation for the combustion progress variable, a model of the turbulent flame speed kind. It is also concluded that, depending on the particular resolution restrictions of each problem and on its geometry, the use of different simulation methodologies, LES or RANS, is the most adequate solution for modeling the turbulence. Based on these findings, the candidate undertakes the creation of a combustion model in the framework of the turbulent flame speed methodology which is able to overcome the deficiencies of the available models for low-resolution problems. In particular, the model uses a heuristic algorithm to keep the thickness of the flame brush under control, a serious deficiency of the Zimont model. Under the approach adopted by the candidate, the emphasis of the analysis lies on the accurate determination of the burning velocity, both laminar and turbulent. On the one hand, the laminar burning velocity is determined through a newly developed correlation which is able to describe the simultaneous influence of the equivalence ratio, temperature, steam dilution and pressure on the laminar burning velocity. The formulation obtained is valid over a larger domain of temperature, steam dilution and pressure than any of the previously available formulations. On the other hand, a certain number of turbulent burning velocity correlations are available in the literature. To select the most suitable one, they were compared with experiments and ranked, with the outcome that the formulation due to Schmidt was the most adequate for the conditions studied. Subsequently, the role of flame instabilities in the development of explosions is assessed. They appear to be significant for lean mixtures in which the turbulence intensity remains moderate. These are important conditions because they are typical of accidents in nuclear power plants. Therefore, the creation of a model to account for the instabilities, and specifically the acoustic-parametric instability, is undertaken. This includes the mathematical derivation of the heuristic formulation of Bauwens et al. for the calculation of the burning-velocity enhancement due to flame instabilities, as well as the analysis of the stability of flames with respect to a cyclic velocity perturbation. The results are combined to build a model of the acoustic-parametric instability.
The following task in this research was to apply the model developed to several problems significant for industrial safety, followed by the analysis of the results and their comparison with the corresponding experimental data. As part of this task, simulations of explosions in a tunnel and in large containers, with and without concentration gradients and venting, were carried out. As a general outcome, the validation of the model is achieved, confirming its suitability for the problems addressed. As a final undertaking, a thorough study of the Fukushima-Daiichi catastrophe has been carried out. The analysis aims at determining the amount of hydrogen participating in the explosion that happened in reactor unit one, in contrast with other analyses centered on the amount of hydrogen generated during the accident. As an outcome of the research, it was determined that the most probable amount of hydrogen exploding during the catastrophe was 130 kg. It is remarkable that the combustion of such a small quantity of material can cause tremendous damage. This is an indication of the importance of these types of investigations. The industrial branches that can benefit from the applications of the model developed in this thesis include the whole future hydrogen economy, as well as nuclear safety in both fission and fusion technology.
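For reference, turbulent flame speed closures of the type developed here advance a single Favre-averaged progress variable c̃, with the combustion physics lumped into the turbulent burning velocity S_T; a generic form of the transport equation (not the thesis's exact implementation) reads:

\[ \frac{\partial(\bar\rho\,\tilde c)}{\partial t} + \nabla\cdot\left(\bar\rho\,\tilde{\mathbf{u}}\,\tilde c\right) = \nabla\cdot\left(\bar\rho\,D_t\,\nabla\tilde c\right) + \rho_u\,S_T\,\lvert\nabla\tilde c\rvert . \]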
Abstract:
The Muskingum-Cunge method, with more than 45 years of history, remains one of the most widely used approaches for computing channel routing. Once calibrated, it gives precise results and is also much faster than the methods that solve the full hydraulic equations. For this reason, the present research analyzed its accuracy by comparing it with the results of a two-dimensional hydraulic model. In parallel, its limitations were analyzed and a practical application methodology was tested. With this motivation, more than 200 routing simulations were carried out in prismatic and natural channels. The calculations were performed with HEC-HMS, using the 8-point cross-section Muskingum-Cunge method, and with the two-dimensional hydraulic calculation tool InfoWorks ICM. HEC-HMS was chosen for its widespread use and InfoWorks ICM for its computational speed, since it uses CUDA technology (Compute Unified Device Architecture). The two-dimensional hydraulic model was first validated against the one-dimensional formulation under uniform and gradually varied flow, as well as against analytical unsteady-flow formulas, with very satisfactory results. A mesh sensitivity analysis of the two-dimensional model applied to routing was also carried out, producing charts of recommended 2D element sizes that quantify the error committed. Using dimensional analysis, a correlation between the results obtained with the two methods was reviewed, weighing their accuracy and defining validity intervals for better use of the Muskingum-Cunge method. At the same time, a methodology was developed to obtain the representative average 8-point cross-section for a routing calculation, based on a series of simplified two-dimensional simulations. The aim is thereby to facilitate the use and correct definition of hydrological models. The Muskingum-Cunge methodology, which has been used for more than 45 years, is still one of the main procedures to calculate stream routing. Once calibrated, it gives precise results, and it is also much faster than other methods that consider the full hydraulic equations. Therefore, in the present investigation an analysis of its accuracy was carried out by comparing it with the results of a two-dimensional hydraulic model. At the same time, reasonable ranges of applicability as well as an iterative method for its adequate use were defined. With this motivation more than 200 simulations of stream routing were conducted in both synthetic and natural waterways. Calculations were performed with the aid of HEC-HMS, choosing the Muskingum-Cunge 8-point cross-section method, and in InfoWorks ICM, a two-dimensional hydraulic calculation software. HEC-HMS was chosen because of its extensive use and InfoWorks ICM for its calculation speed, as it takes advantage of the CUDA technology (Compute Unified Device Architecture). Initially, the two-dimensional hydraulic engine was compared to the one-dimensional formulation in both uniform and varied flow. Then it was contrasted with variable-flow analytical formulae, achieving most satisfactory results. A mesh-size sensitivity analysis of the two-dimensional routing model was also conducted, obtaining charts with suggested 2D element sizes to narrow the committed error.
With the technique of dimensional analysis a correlation of results between the two methods was reviewed, assessing their accuracy and defining valid intervals for improved use of the Muskingum-Cunge method. Simultaneously, a methodology to draw a representative 8 point cross-section was developed, based on a sequence of simplified two-dimensional simulations. This procedure is intended to provide a simplified approach and accurate definition of hydrological models.
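A minimal sketch of the routing scheme being benchmarked, with K and X set by the usual Cunge estimates; the hydrograph and channel parameters are illustrative, not taken from the study:

```python
# Hedged sketch of Muskingum-Cunge channel routing:
#   O[j+1] = C1*I[j+1] + C2*I[j] + C3*O[j], with K = dx/c and
#   X = 0.5*(1 - Q/(B*S0*c*dx)) following the Cunge parameterization.
def muskingum_cunge(inflow, dt, dx, celerity, q_ref, width, slope):
    K = dx / celerity                                   # reach travel time (s)
    X = 0.5 * (1.0 - q_ref / (width * slope * celerity * dx))
    D = 2 * K * (1 - X) + dt
    c1, c2, c3 = (dt - 2*K*X) / D, (dt + 2*K*X) / D, (2*K*(1 - X) - dt) / D
    outflow = [inflow[0]]
    for j in range(1, len(inflow)):
        outflow.append(c1*inflow[j] + c2*inflow[j-1] + c3*outflow[-1])
    return outflow

hydrograph = [10, 20, 50, 90, 60, 35, 20, 12, 10]       # m3/s, hourly values
print(muskingum_cunge(hydrograph, dt=3600, dx=5000, celerity=1.5,
                      q_ref=50, width=30, slope=0.001))
```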
Abstract:
Population structure and linkage disequilibrium are two fundamental processes for evolutionary and association mapping studies. Traditionally, both have been investigated with commonly used classical methods. Such methods have certainly provided major advances in the understanding of species' evolutionary processes. In general, however, none of them takes a genealogical view that considers genetic events occurring in the past, which hampers the understanding of the variation patterns observed in the present. An approach that allows retrospective investigation based on the currently observed polymorphism is coalescent theory. The aim of this work was therefore to analyze, based on coalescent theory, the population structure and linkage disequilibrium of a worldwide panel of sorghum (Sorghum bicolor) accessions. To this end, analyses of mutation, migration with gene flow, and recombination were carried out for five genomic regions related to plant height and maturity (Dw1, Dw2, Dw4, Ma1 and Ma3) and seven previously selected populations. In general, high average gene flow (M = m/μ = 41.78 − 52.07) was observed among populations, considering each genomic region individually and all of them simultaneously. The patterns suggested intense exchange of accessions and a specific evolutionary history for each genomic region, showing the importance of analyzing the loci individually. The average number of migrants per generation (M) was not symmetric between reciprocal pairs of populations in either the individual or the simultaneous analysis of the regions. This suggests that the way populations have related to each other and continue to interact evolutionarily is not the same, showing that the classical methods used to investigate population structure may be unsatisfactory. Low average recombination rates (ρL = 2Ner = 0.030 − 0.246) were observed using the model with a constant recombination rate along the region. Low and high average recombination rates (ρr = 2Ner = 0.060 − 3.395) were estimated using the model with a variable recombination rate along the region. The traditional (r2) and coalescent-based (E[r2 rhomap]) methods used to estimate linkage disequilibrium gave similar results for some genomic regions and populations. However, r2 suggested discontinuous disequilibrium patterns on several occasions, making it difficult to understand and characterize possible association blocks. The coalescent-based method (E[r2 rhomap]) gave results that appeared more consistent and may be an important strategy for refining non-random association patterns. The results found here suggest that genetic mapping from a single gene pool may be insufficient to detect causal associations important for quantitative traits in sorghum.
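For reference, the classical pairwise statistic compared here with the coalescent-based estimate E[r2 rhomap] is computed from haplotype and allele frequencies as:

\[ D = p_{AB} - p_A\,p_B, \qquad r^2 = \frac{D^2}{p_A\left(1-p_A\right)p_B\left(1-p_B\right)}. \]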
Abstract:
In this paper we propose a two-component polarimetric model for soil moisture estimation on vineyards suited for C-band radar data. According to a polarimetric analysis carried out here, this scenario is made up of one dominant direct return from the soil and a multiple-scattering component accounting for disturbing and nonmodeled signal fluctuations from soil and short vegetation. We propose a combined X-Bragg/Fresnel approach to characterize the polarized direct response from the soil. A validation of this polarimetric model has been performed in terms of its consistency with respect to the available data, both from RADARSAT-2 and from indoor measurements. High inversion rates are reported for different phenological stages of the vines, and the model gives a consistent interpretation of the data as long as the volume component power remains at or below about 50% of the surface contribution power. However, the scarcity of soil moisture measurements in this study prevents the validation of the algorithm in terms of the accuracy of soil moisture retrieval, and an extensive campaign is required to fully demonstrate the validity of the model. Different sources of mismatch between the model and the data have also been discussed and analyzed.
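Schematically, the two-component model described above writes the measured polarimetric covariance (or coherency) matrix as the sum of a polarized direct-soil term, characterized by the combined X-Bragg/Fresnel approach, and a multiple-scattering term; this is only a generic sketch of the decomposition, not the paper's exact parameterization:

\[ [C] \simeq f_s\,[C_{\mathrm{soil}}] + f_v\,[C_{\mathrm{multiple}}], \]

with the inversion reported to remain consistent while the multiple-scattering power f_v stays at or below about half of the surface power f_s.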
Abstract:
The Import Substitution Process in Latin America was an attempt to enhance GDP growth and productivity by raising trade barriers on capital-intensive products. Our main goal is to analyze how an increase in the import tariff on a particular type of good affects the production choices and trade pattern of an economy. We develop an extension of the dynamic Heckscher-Ohlin model – a combination of a static two-good, two-factor Heckscher-Ohlin model and a two-sector growth model – allowing for an import tariff. We then calibrate the closed economy model to the US. The results show that the economy will produce less of both consumption and investment goods under autarky for low and high levels of capital stock per worker. We also find that total GDP may be lower under free trade in comparison to autarky.
Abstract:
Subaerially erupted tholeiites at Hole 642E were never exposed to the high-temperature seawater circulation and alteration conditions that are found at subaqueous ridges. Alteration of Site 642 rocks is therefore the product of the interaction of rocks and fluids at low temperatures. The alteration mineralogy can thus be used to provide information on the geochemical effects of low-temperature circulation of seawater. Rubidium-strontium systematics of leached and unleached tholeiites and underlying, continentally-derived dacites reflect interactions with seawater in fractures and vesicular flow tops. The secondary mineral assemblage in the tholeiites consists mainly of smectite, accompanied in a few flows by the assemblage celadonite + calcite (+/- native Cu). Textural relationships suggest that smectites formed early and that celadonite + calcite, which are at least in part cogenetic, formed later than and partially at the expense of smectite. Smectite precipitation occurred under variable, but generally low, water/rock conditions. The smectites contain much lower concentrations of alkali elements than has been reported in seafloor basalts, and sequentially leached fractions of smectite contain Sr that has not achieved isotopic equilibrium. 87Sr/86Sr results of the leaching experiments suggest that Sr was mostly derived from seawater during early periods of smectite precipitation. The basalt-like 87Sr/86Sr of the most readily exchangeable fraction seems to suggest a late period of exposure to very low water/rock ratios. Smectite formation may have primarily occurred in the interval between the nearly 58 Ma age given by the lower series dacites and the 54.5 +/- 0.2 Ma model age given by a celadonite from the top of the tholeiitic section. The 54.5 +/- 0.2 Ma Rb-Sr model age may be recording the timing of foundering of the Voring Plateau. Celadonites precipitated in flows below the top of the tholeiitic section define a Rb-Sr isochron with a slope corresponding to an age of 24.3 +/- 0.4 Ma. This isochron may be reflecting mixing effects due to long-term chemical interaction between seawater and basalts, in which case the age provides only a minimum for the timing of late alteration. Alternatively, inferential arguments can be made that the 24.3 +/- 0.4 Ma isochron age reflects the timing of the late Oligocene-early Miocene erosional event that affected the Norwegian-Greenland Sea. Correlation of 87Sr/86Sr and 1/Sr in calcites results in a two-component mixing model for late alteration products. One end-member of the mixing trend is Eocene or younger seawater. Strontium from the nonradiogenic end-member cannot, however, have been derived directly from the basalts. Rather, the data suggest that Sr in the calcites is a mixture of Sr derived from seawater and from pre-existing smectites. For Site 642, the reaction involved can be generalized as smectite + seawater → celadonite + calcite. The geochemical effects of this reaction include net gains of K and CO2 by the secondary mineral assemblage. The gross similarity of the reactions involved in late, low-temperature alteration at Site 642 to those observed in other seafloor basalts suggests that the transfer of K and CO2 to the crust during low-temperature seawater-ocean crust interactions may be significant in calculations of global fluxes.
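A hedged sketch of how an Rb-Sr isochron or model age follows from the slope of 87Sr/86Sr against 87Rb/86Sr, i.e. t = ln(1 + slope)/λ with the conventional 87Rb decay constant; the sample values below are synthetic, not the Site 642 data:

```python
# Hedged sketch of an Rb-Sr isochron age calculation from synthetic data.
import numpy as np

LAMBDA_RB87 = 1.42e-11          # /yr, conventional 87Rb decay constant

def isochron_age(rb_sr_ratio, sr_ratio):
    slope, intercept = np.polyfit(rb_sr_ratio, sr_ratio, 1)
    return np.log1p(slope) / LAMBDA_RB87, intercept     # age (yr), initial 87Sr/86Sr

rb_sr = np.array([0.5, 5.0, 20.0, 60.0])                  # 87Rb/86Sr
sr = 0.7091 + (np.exp(LAMBDA_RB87 * 24.3e6) - 1) * rb_sr  # synthetic 24.3 Ma isochron
age, sr0 = isochron_age(rb_sr, sr)
print(f"{age/1e6:.1f} Ma, initial 87Sr/86Sr = {sr0:.4f}")
```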
Abstract:
The research was aimed at developing a technology to combine the production of useful microfungi with the treatment of wastewater from food processing. A recycle bioreactor equipped with a micro-screen was developed as a wastewater treatment system on a laboratory scale to contain a Rhizopus culture and maintain its dominance under non-aseptic conditions. Competitive growth of bacteria was observed, but this was minimised by manipulation of the solids retention time and the hydraulic retention time. Removal of about 90% of the waste organic material (as BOD) from the wastewater was achieved simultaneously. Since essentially all fungi are retained behind the 100 μm aperture screen, the solids retention time could be controlled by the rate of harvesting. The hydraulic retention time was employed to control the bacterial growth, as the bacteria were washed through the screen at a short HRT. A steady state model was developed to determine these two parameters. This model predicts the effluent quality. Experimental work is still needed to determine the growth characteristics of the selected fungal species under optimum conditions (pH and temperature).
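A minimal sketch of the washout logic behind the SRT/HRT control strategy described above, assuming simple chemostat-style kinetics; the growth rates and retention times are illustrative placeholders:

```python
# Hedged sketch: fungi retained on the screen are removed only by harvesting
# (specific removal rate 1/SRT), whereas unattached bacteria leave with the
# effluent (dilution rate 1/HRT), so a short HRT washes the bacteria out.
def steady_state_ok(mu_fungus, mu_bacteria, srt_h, hrt_h):
    fungus_persists = mu_fungus >= 1.0 / srt_h      # harvesting slower than fungal growth
    bacteria_washout = mu_bacteria < 1.0 / hrt_h    # dilution faster than bacterial growth
    return fungus_persists and bacteria_washout

# e.g. a Rhizopus culture growing at 0.15 1/h versus bacteria at 0.4 1/h:
print(steady_state_ok(mu_fungus=0.15, mu_bacteria=0.4, srt_h=24, hrt_h=2))
```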