954 results for Monte Carlo method
Abstract:
This work aims to evaluate the reliability of levee systems by calculating the probability of failure of specific levee stretches under different loads, using probabilistic methods based on fragility curves obtained through the Monte Carlo method. Overtopping and piping are considered as failure mechanisms, since these are the most frequent, and the analysis addresses the major levee system of the Po River, with a primary focus on the section between Piacenza and Cremona in the lower-middle Padana Plain. The novelty of this approach is that it checks the reliability of individual embankment stretches, not just a single section, while accounting for the variability of the levee system geometry from one stretch to another. For each levee stretch analysed, the work also considers a probability distribution of the load variables that enter the definition of the fragility curves, reflecting the differences in riverbed topography and morphology along the analysed reach and across the levee system as a whole. A classification is proposed for both failure mechanisms to indicate the reliability of the levee system based on the information obtained from the fragility curve analysis. To this end, a hydraulic model has been developed in which a 500-year flood is simulated to determine the residual hazard of failure for each levee stretch at the corresponding water depth; the results are then compared with the proposed classifications. This work additionally aims to act as an interface between Applied Geology and Environmental Hydraulic Engineering, two professions whose close collaboration is needed to improve the estimation of hydraulic risk.
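In outline, a Monte Carlo fragility curve for one stretch can be estimated by sampling uncertain resistance parameters and counting failures at each load level. The following is a minimal sketch, assuming a simplified piping limit state with purely hypothetical parameters, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(42)

def fragility_curve(water_levels, n_samples=10_000):
    """Estimate P(failure | water level) for a single levee stretch.

    Hypothetical limit state: piping occurs when the hydraulic exit
    gradient exceeds a critical gradient, which is uncertain and
    sampled from a lognormal distribution (illustrative parameters).
    """
    seepage_length = rng.normal(loc=40.0, scale=5.0, size=n_samples)   # m
    critical_gradient = rng.lognormal(mean=np.log(0.3), sigma=0.25,
                                      size=n_samples)
    p_fail = []
    for h in water_levels:
        gradient = h / seepage_length            # simplified exit gradient
        p_fail.append(np.mean(gradient > critical_gradient))
    return np.array(p_fail)

levels = np.linspace(1.0, 10.0, 19)              # water depth above toe, m
print(dict(zip(levels.round(1), fragility_curve(levels).round(3))))
```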
Abstract:
This report presents the development of a Stochastic Knock Detection (SKD) method for combustion knock detection in a spark-ignition engine using a model-based design approach. A Knock Signal Simulator (KSS) was developed as the plant model for the engine. The KSS generates cycle-to-cycle accelerometer knock intensities following a stochastic approach: intensities are drawn, using a Monte Carlo method, from a lognormal distribution whose parameters have been predetermined from engine tests and depend upon spark timing, engine speed, and load. Previous studies have shown the lognormal distribution to be a good approximation to the distribution of measured knock intensities over a range of engine conditions and spark timings for multiple engines. The SKD method is implemented in a Knock Detection Module (KDM), which processes the knock intensities generated by the KSS with a stochastic distribution estimation algorithm and outputs estimates of the high and low knock intensity levels, which characterize the knock and reference levels, respectively. These estimates are then used to determine a knock factor, which provides a quantitative measure of the knock level and can be used as a feedback signal to control engine knock. The knock factor is analyzed and compared with a traditional knock detection method under various engine operating conditions. To verify the effectiveness of the SKD method, a knock controller was also developed and tested in a model-in-the-loop (MIL) system. The objective of the knock controller is to allow the engine to operate as close as possible to its borderline spark timing without significant engine knock. The controller parameters were tuned to minimize the cycle-to-cycle variation in spark timing and the settling time of the controller in responding to a step increase in spark advance resulting in the onset of engine knock. The simulation results showed that the combined system can adequately model engine knock and evaluate knock control strategies for a wide range of engine operating conditions.
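The Monte Carlo core of such a simulator reduces to drawing lognormal intensities cycle by cycle. A minimal sketch follows, with hypothetical distribution parameters; the percentile-based knock factor shown is only a stand-in for the KDM's actual stochastic distribution estimation algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative lognormal parameters; in the study they were
# predetermined from engine tests as functions of spark timing,
# speed, and load (the values here are hypothetical).
def simulate_knock_intensities(n_cycles, mu=-1.2, sigma=0.6):
    """Draw cycle-to-cycle knock intensities from a lognormal distribution."""
    return rng.lognormal(mean=mu, sigma=sigma, size=n_cycles)

def knock_factor(intensities, low_q=50, high_q=95):
    """Crude stand-in for the KDM estimates: ratio of a high-intensity
    level (knock) to a low-intensity level (reference)."""
    low = np.percentile(intensities, low_q)
    high = np.percentile(intensities, high_q)
    return high / low

x = simulate_knock_intensities(1_000)
print(f"knock factor ~ {knock_factor(x):.2f}")
```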
Abstract:
For half a century the integrated circuits (ICs) that make up the heart of electronic devices have been steadily improving by shrinking at an exponential rate. However, as the current crop of ICs gets smaller and the insulating layers involved become thinner, electrons leak through due to quantum mechanical tunneling. This is one of several issues that will bring an end to this incredible streak of exponential improvement, after which future gains will have to come from fundamentally different transistor architectures rather than from fine-tuning and miniaturizing the metal-oxide-semiconductor field-effect transistors (MOSFETs) in use today. Several new transistor designs, some designed and built here at Michigan Tech, involve electrons tunneling their way through arrays of nanoparticles. We use a multi-scale approach to model these devices and study their behavior. To investigate the tunneling characteristics of the individual junctions, we use a first-principles approach to model conduction between sub-nanometer gold particles. To estimate the change in energy due to the movement of individual electrons, we use the finite element method to calculate electrostatic capacitances. The kinetic Monte Carlo method allows us to use our knowledge of these details to simulate the dynamics of an entire device, sometimes consisting of hundreds of individual particles, and watch as the device ‘turns on’ and starts conducting an electric current. Scanning tunneling microscopy (STM) and the closely related scanning tunneling spectroscopy (STS) are a family of powerful experimental techniques that allow for the probing and imaging of surfaces and molecules at atomic resolution. However, interpretation of the results often requires comparison with theoretical and computational models. We have developed a new method for calculating STM topographs and STS spectra. This method combines an established method for approximating the geometric variation of the electronic density of states with a modern method for calculating spin-dependent tunneling currents, offering a unique balance between accuracy and accessibility.
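The kinetic Monte Carlo step itself is generic (the rejection-free "direct" method): pick an event with probability proportional to its rate, then advance the clock by an exponential waiting time. A sketch with hypothetical hop rates; in the actual model the rates derive from the first-principles junction calculations and electrostatic energies described above:

```python
import numpy as np

rng = np.random.default_rng(1)

def kmc_step(rates):
    """One kinetic Monte Carlo step: choose an event with probability
    proportional to its rate, and advance time by an exponential
    waiting time governed by the total rate."""
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)
    dt = rng.exponential(1.0 / total)
    return event, dt

# Hypothetical hop rates (1/s) between neighbouring nanoparticles.
rates = np.array([2.0e9, 5.0e8, 1.2e9, 3.0e8])

t, counts = 0.0, np.zeros_like(rates)
for _ in range(100_000):
    event, dt = kmc_step(rates)
    counts[event] += 1
    t += dt
print(f"simulated time: {t:.3e} s, event fractions: {counts / counts.sum()}")
```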
Abstract:
Calmodulin (CaM) is a ubiquitous Ca(2+) buffer and second messenger that affects cellular functions as diverse as cardiac excitability, synaptic plasticity, and gene transcription. In CA1 pyramidal neurons, CaM regulates two opposing Ca(2+)-dependent processes that underlie memory formation: long-term potentiation (LTP) and long-term depression (LTD). Induction of LTP and LTD requires activation of Ca(2+)-CaM-dependent enzymes: Ca(2+)/CaM-dependent kinase II (CaMKII) and calcineurin, respectively. Yet it remains unclear how Ca(2+) and CaM produce these two opposing effects. CaM binds four Ca(2+) ions: two in its N-terminal lobe and two in its C-terminal lobe. Experimental studies have shown that the N- and C-terminal lobes of CaM have different binding kinetics toward Ca(2+) and its downstream targets, suggesting that each lobe may respond differently to Ca(2+) signal patterns. Here, we use a novel event-driven particle-based Monte Carlo simulation and statistical point pattern analysis to explore the spatial and temporal dynamics of lobe-specific Ca(2+)-CaM interaction at the single-molecule level. We show that the N-lobe of CaM, but not the C-lobe, exhibits a nano-scale domain of activation that is highly sensitive to the location of Ca(2+) channels and to the microscopic injection rate of Ca(2+) ions. We also demonstrate that Ca(2+) saturation takes place via two different pathways depending on the Ca(2+) injection rate, one dominated by the N-terminal lobe and the other by the C-terminal lobe. Taken together, these results suggest that the two lobes of CaM function as distinct Ca(2+) sensors that can differentially transduce Ca(2+) influx to downstream targets. We discuss a possible role of the N-terminal lobe-specific Ca(2+)-CaM nano-domain in the CaMKII activation required for the induction of synaptic plasticity.
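The event-driven idea can be illustrated with a Gillespie-style simulation of a single binding site per lobe, with hypothetical rates that capture only the qualitative fast-N/slow-C asymmetry; the study's spatial, particle-based simulation is far richer than this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical on/off rates (1/s) capturing the qualitative picture of
# a fast, lower-affinity N-lobe and a slow, higher-affinity C-lobe.
RATES = {"N": {"on": 5.0e2, "off": 1.0e3},
         "C": {"on": 1.0e2, "off": 1.0e1}}

def gillespie_lobe(lobe, t_end=5.0):
    """Event-driven (Gillespie) simulation of one lobe binding and
    unbinding Ca2+; returns the fraction of time spent Ca2+-bound."""
    k_on, k_off = RATES[lobe]["on"], RATES[lobe]["off"]
    t, bound, t_bound = 0.0, False, 0.0
    while t < t_end:
        rate = k_off if bound else k_on
        dt = rng.exponential(1.0 / rate)
        if bound:
            t_bound += min(dt, t_end - t)
        t += dt
        bound = not bound
    return t_bound / t_end

for lobe in ("N", "C"):
    print(lobe, f"fraction bound ~ {gillespie_lobe(lobe):.2f}")
```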
Abstract:
The decomposition of soil organic matter (SOM) is temperature dependent, but its response to a future warmer climate remains equivocal. Enhanced rates of SOM decomposition under increased global temperatures might cause higher CO2 emissions to the atmosphere and could therefore constitute a strong positive feedback. The magnitude of this feedback remains poorly understood, however, primarily because of the difficulty in quantifying the temperature sensitivity of stored, recalcitrant carbon, which comprises the bulk (>90%) of SOM in most soils. In this study we investigated the effects of climatic conditions on soil carbon dynamics using the attenuation of the 14C ‘bomb’ pulse as recorded in selected modern European speleothems. These new data were combined with published results to further examine soil carbon dynamics and to explore the sensitivity of labile and recalcitrant organic matter decomposition to different climatic conditions. Temporal changes in 14C activity inferred from each speleothem were modelled using a three-pool soil carbon inverse model (applying a Monte Carlo method) to constrain soil carbon turnover rates at each site. Speleothems from sites characterised by semi-arid conditions, sparse vegetation, thin soil cover, and high mean annual air temperatures (MAATs) exhibit weak attenuation of the atmospheric 14C ‘bomb’ peak (a low damping effect, D, in the range 55–77%) and low modelled mean respired carbon ages (MRCA), indicating that decomposition is dominated by young, recently fixed soil carbon. By contrast, humid and high-MAAT sites characterised by a thick soil cover and dense, well-developed vegetation display the highest damping effect (D = c. 90%) and the highest MRCA values (in the range from 350 ± 126 years to 571 ± 128 years). This suggests that carbon incorporated into these stalagmites originates predominantly from the decomposition of old, recalcitrant organic matter. SOM turnover rates cannot be ascribed to a single climate variable (e.g. MAAT) but instead reflect a complex interplay of climate (e.g. MAAT and moisture budget) and vegetation development.
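The inverse-modelling step can be sketched as a Monte Carlo search over pool turnover times, forward-modelling the damping of a bomb pulse through first-order reservoirs. Everything below is a toy, assuming a synthetic atmospheric curve and fixed pool weights, unlike the study's calibrated model and data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy atmospheric 14C 'bomb' curve (fraction modern), hypothetical values.
years = np.arange(1950, 2011)
atm = 1.0 + 0.9 * np.exp(-((years - 1964) / 12.0) ** 2)

def dripwater_14c(turnover_times, weights):
    """Forward model: each soil-carbon pool is a first-order reservoir,
    so its 14C is the atmospheric curve convolved with an exponential
    residence-time distribution; dripwater 14C is a weighted mix."""
    out = np.zeros_like(atm)
    for tau, w in zip(turnover_times, weights):
        kernel = np.exp(-np.arange(len(years)) / tau)
        kernel /= kernel.sum()
        out += w * np.convolve(atm, kernel)[: len(years)]
    return out

target = dripwater_14c([1.0, 30.0, 400.0], [0.3, 0.4, 0.3])  # synthetic "data"

best, best_err = None, np.inf
for _ in range(20_000):                       # Monte Carlo parameter search
    taus = rng.uniform([0.5, 5, 100], [5, 100, 1000])
    pred = dripwater_14c(taus, [0.3, 0.4, 0.3])
    err = np.sum((pred - target) ** 2)
    if err < best_err:
        best, best_err = taus, err
print("best-fit turnover times (yr):", np.round(best, 1))
```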
Abstract:
The quantum dimer model on the square lattice is a U(1) gauge theory that addresses aspects of the physics of high-Tc superconductors. Using a quantum Monte Carlo method, we show that the theory exists in a confining columnar valence bond solid phase. The interfaces separating distinct columnar phases display plaquette order, which, however, is not realized as a bulk phase. Static “electric” charges are confined by flux tubes that consist of multiple strands, each carrying a fractionalized flux of ¼. A soft pseudo-Goldstone mode (which becomes exactly massless at the Rokhsar-Kivelson point) extends deep into the columnar phase, with potential implications for high-Tc physics.
Abstract:
Caregiving for individuals with Alzheimer's disease is associated with chronic stress and elevated symptoms of depression. Placement of the care receiver (CR) into a long-term care setting may be associated with improved caregiver well-being; however, the psychological mechanisms underlying this relationship are unclear. This study evaluated whether decreases in activity restriction and increases in personal mastery mediated placement-related reductions in caregiver depressive symptoms. In a 5-year longitudinal study of 126 spousal Alzheimer's disease caregivers, we used multilevel models to evaluate placement-related changes in depressive symptoms (short form of the Center for Epidemiologic Studies Depression scale), activity restriction (Activity Restriction Scale), and personal mastery (Pearlin Mastery Scale) in 44 caregivers who placed their spouses into long-term care relative to caregivers who never placed their CRs. The Monte Carlo method for assessing mediation was used to evaluate the significance of the indirect effect of activity restriction and personal mastery on postplacement changes in depressive symptoms. Placement of the CR was associated with significant reductions in depressive symptoms and activity restriction and was also associated with increased personal mastery. Lower activity restriction and higher personal mastery were associated with reduced depressive symptoms. Furthermore, both variables significantly mediated the effect of placement on depressive symptoms. Placement-related reductions in activity restriction and increases in personal mastery are important psychological factors that help explain postplacement reductions in depressive symptoms. The implications for clinical care provided to caregivers are discussed.
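The Monte Carlo method for assessing mediation builds a confidence interval for the indirect effect (the product of the two mediation paths) by sampling each path coefficient from its approximate sampling distribution. A minimal sketch with hypothetical estimates, not the study's actual coefficients:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical path estimates and standard errors:
# a = effect of placement on activity restriction,
# b = effect of activity restriction on depressive symptoms.
a_hat, se_a = -0.40, 0.12
b_hat, se_b = 0.55, 0.10

# Sample each path from its (approximate) normal sampling
# distribution and form the product a*b, the indirect effect.
a = rng.normal(a_hat, se_a, size=100_000)
b = rng.normal(b_hat, se_b, size=100_000)
indirect = a * b

lo, hi = np.percentile(indirect, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
# Mediation is deemed significant if the interval excludes zero.
```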
Abstract:
67P/Churyumov-Gerasimenko (67P) is a Jupiter-family comet and the object of investigation of the European Space Agency's Rosetta mission. This report presents the first full 3D simulation results for 67P's neutral gas coma. In this study we include results from a direct simulation Monte Carlo (DSMC) method, a hydrodynamic code, and a purely geometric calculation that computes the total illuminated surface area of the nucleus. All models include the triangulated 3D shape model of 67P as well as realistic illumination and shadowing conditions. The basic assumption is that the illumination conditions on the nucleus are the main driver of the comet's gas activity; as a consequence, the total production rate of 67P varies as a function of solar insolation. To validate the output of our numerical simulations we compare the results of all three models to in situ gas number density measurements from the ROSINA COPS instrument. All three models reproduce the overall features of these local neutral number density measurements for the period between early August 2014 and 1 January 2015. The best agreement between the models and the data is achieved when gas fluxes on the night side are in the range of 7% to 10% of the maximum flux, accounting for contributions from the most volatile components. Some details in the measurements are not reproduced and warrant further investigation and refinement of the models; nevertheless, the models support the overall assumption that illumination conditions on the nucleus are at least an important driver of the gas activity. According to our simulation results, the total production rate of 67P was constant between August and November 2014, at about 1 × 10²⁶ molecules s⁻¹.
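The purely geometric component could look like the sketch below, assuming a triangulated mesh with facet normals and areas; self-shadowing, which the real calculation traces on the actual 67P shape model, is ignored here:

```python
import numpy as np

def insolation_proxy(normals, areas, sun_dir):
    """Geometric activity proxy: sum facet areas weighted by the cosine
    of the solar incidence angle, counting only facets whose outward
    normal faces the Sun (self-shadowing ignored in this sketch)."""
    sun_dir = sun_dir / np.linalg.norm(sun_dir)
    cos_inc = normals @ sun_dir
    return np.sum(areas * np.clip(cos_inc, 0.0, None))

# Hypothetical mini-mesh: facet unit normals and areas (m^2).
normals = np.array([[1.0, 0, 0], [0, 1.0, 0], [-1.0, 0, 0], [0, 0, 1.0]])
areas = np.array([1.0e5, 2.0e5, 1.5e5, 2.5e5])
print(f"{insolation_proxy(normals, areas, np.array([1.0, 1.0, 0.0])):.3e} m^2")
```

Scaling such a proxy by a per-area sublimation rate (plus a small night-side floor of order 7-10%, as found above) gives a time-varying total production rate.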
Abstract:
A Bayesian approach to estimating the regression coefficients of a multinomial logit model with ordinal-scale response categories is presented. A Monte Carlo method is used to construct the posterior distribution of the link function, which is treated as an arbitrary scalar function. The Gauss-Markov theorem is then used to determine a function of the link that produces a random vector of coefficients, and the posterior distribution of this random vector is used to estimate the regression coefficients. The method is referred to as a Bayesian generalized least squares (BGLS) analysis. Two cases involving multinomial logit models are described: Case I involves a cumulative logit model, and Case II involves a proportional-odds model. All inferences about the coefficients for both cases are expressed in terms of the posterior distribution of the regression coefficients, and the BGLS results are compared with maximum likelihood estimates. The BGLS method avoids the nonlinear problems encountered when estimating the regression coefficients of a generalized linear model, is neither complex nor computationally intensive, and offers several advantages over existing Bayesian approaches.
Abstract:
In regression analysis, covariate measurement error occurs in many applications, and the error-prone covariates are often referred to as latent variables. This study extends the work of Chan et al. (2008) on recovering the latent slope in a simple regression model to the multiple regression setting. We present an approach that applies the Monte Carlo method in a Bayesian framework to a parametric regression model with measurement error in an explanatory variable. The proposed estimator uses the conditional expectation of the latent slope given the observed outcome and surrogate variables in the multiple regression model. A simulation study shows that the method produces an efficient estimator in the multiple regression model, especially when the measurement error variance of the surrogate variable is large.
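The following is not the authors' Bayesian conditional-expectation estimator, but a sketch of the attenuation problem it addresses, simulated by Monte Carlo with a classical method-of-moments correction shown for contrast (all parameters hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a latent covariate X observed only through a noisy
# surrogate W = X + U, and show attenuation of the naive slope.
n, beta = 5_000, 2.0
x = rng.normal(0.0, 1.0, n)                  # latent variable
u = rng.normal(0.0, 1.0, n)                  # large measurement error
w = x + u                                    # surrogate variable
y = beta * x + rng.normal(0.0, 0.5, n)

naive = np.polyfit(w, y, 1)[0]               # slope of y on the surrogate
reliability = np.var(x) / np.var(w)          # attenuation factor
print(f"true slope {beta}, naive slope {naive:.2f}, "
      f"corrected {naive / reliability:.2f}")
```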
Abstract:
This thesis addresses on-road vehicle detection and tracking with a monocular vision system. The problem has attracted the attention of the automotive industry and the research community, as it is the first step towards driver assistance and collision avoidance systems and, ultimately, autonomous driving. Although much effort has been devoted to it in recent years, no fully satisfactory solution has yet been devised, and it remains an active research issue. The main challenges for vision-based vehicle detection and tracking are the high variability among vehicles, the dynamically changing background due to camera motion, and the real-time processing requirement. This thesis presents a unified approach to vehicle detection and tracking that tackles these issues with statistical methods. The approach is divided into three primary tasks, i.e., vehicle hypothesis generation, hypothesis verification, and vehicle tracking, which are performed sequentially. Nevertheless, the exchange of information between processing blocks is fostered so that the maximum degree of adaptation to changes in the environment is achieved and the computational cost is alleviated. Two complementary strategies are proposed for the first task, hypothesis generation, based respectively on appearance and geometry analysis. To this end, the use of a rectified domain in which the perspective is removed from the original image is especially interesting, as it allows for fast image scanning and efficient hypothesis generation. The final vehicle candidates are produced using a collaborative framework between the original and rectified domains. A supervised classification strategy is adopted for the verification of hypothesized vehicle locations. In particular, state-of-the-art feature extraction methods are evaluated and new descriptors are proposed by exploiting knowledge of vehicle appearance. Due to the lack of appropriate public databases, a new database was generated, on which the classification performance of the descriptors is extensively tested. Finally, a methodology for the fusion of the different classifiers is presented and the best combinations are discussed. The core of the proposed approach is a Bayesian tracking framework using particle filters. Contributions are made to its three key elements: the inference algorithm, the dynamic model, and the observation model. In particular, a Markov chain Monte Carlo (MCMC) sampling method is proposed, which circumvents the exponential complexity increase of traditional particle filters, thus making joint tracking of multiple vehicles affordable. In addition, the aforementioned rectified domain allows for the definition of a constant-velocity dynamic model, since it preserves the smooth motion of vehicles on highways. Finally, a multiple-cue observation model is proposed that not only accounts for vehicle appearance but also integrates the information available from the analysis in the previous blocks. The proposed approach runs in near real time on a general-purpose PC and delivers outstanding results compared with traditional methods.
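A one-dimensional toy of MCMC-based filtering, not the thesis's joint multi-vehicle model: a Metropolis chain targets the filtering posterior formed by a Gaussian measurement likelihood and a mixture transition prior over the previous sample set (all noise scales hypothetical):

```python
import numpy as np

rng = np.random.default_rng(6)

def mcmc_filter_step(prev_states, measurement, n_mcmc=200,
                     sigma_q=0.5, sigma_r=1.0):
    """One tracking step with MCMC sampling instead of importance
    resampling: a random-walk Metropolis chain targets
    p(x_t | z_t) ∝ N(z_t; x_t, sigma_r) * sum_i N(x_t; x_{t-1}^i, sigma_q)."""
    def log_target(x):
        lik = -0.5 * ((measurement - x) / sigma_r) ** 2
        prior = np.logaddexp.reduce(-0.5 * ((x - prev_states) / sigma_q) ** 2)
        return lik + prior

    x, chain = np.mean(prev_states), []      # chain initialisation
    for _ in range(n_mcmc):
        prop = x + rng.normal(0.0, 0.5)      # random-walk proposal
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        chain.append(x)
    return np.array(chain[n_mcmc // 2:])     # discard burn-in

states = rng.normal(0.0, 1.0, 100)           # samples at time t-1
states = mcmc_filter_step(states, measurement=2.0)
print(f"posterior mean ~ {states.mean():.2f}")
```

The appeal of the MCMC move is that its cost grows roughly linearly with the number of tracked objects, rather than exponentially as in joint-state importance sampling.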
Abstract:
The studies carried out so far to determine the measurement quality of geodetic instruments have been aimed primarily at angle and distance measurements. In recent years, however, GNSS (Global Navigation Satellite System) equipment has become widely used in geomatic applications without an established methodology for obtaining the calibration correction and its uncertainty for such equipment. The purpose of this thesis is to establish the requirements that a network must meet to be considered a standard network with metrological traceability, as well as the methodology for the verification and calibration of GNSS instruments on such standard networks. To this end, a technical calibration procedure for GNSS equipment has been designed and developed in which the contributions to the measurement uncertainty are defined. The procedure, which has been applied on different networks and with different equipment, has allowed the expanded uncertainty of the equipment to be determined following the recommendations of the Guide to the Expression of Uncertainty in Measurement of the Joint Committee for Guides in Metrology. In addition, the three-dimensional coordinates of the stations that constitute the networks considered in the investigation have been determined by satellite-based techniques, and simulations have been developed for several values of the experimental standard deviations of the fixed points used in the least-squares adjustment of the baseline vectors. The results have shown the importance of knowing the experimental standard deviations when computing the uncertainties of the three-dimensional station coordinates. Based on high-quality technical studies and observations previously carried out on these networks, an exhaustive analysis has been performed to determine the requirements that a standard network must meet. In addition, technical calibration procedures have been developed to estimate the expanded measurement uncertainty of the geodetic instruments that provide angles and electromagnetically measured distances, since these instruments disseminate metrological traceability to the standard networks used for the verification and calibration of GNSS equipment. As a result, local calibration corrections for high-accuracy GNSS equipment could be determined on the standard networks. In this thesis, the uncertainty of the calibration correction has been obtained using two different methodologies: the first applies the law of propagation of uncertainty, while the second applies the propagation of distributions using the Monte Carlo method. The analysis of the results confirms the validity of both methodologies for determining the calibration uncertainty of GNSS equipment.
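The Monte Carlo route (propagation of distributions, JCGM 101) reduces to sampling the input quantities and evaluating the measurement model repeatedly. A sketch for a deliberately simple, hypothetical one-dimensional model, not the thesis's full GNSS calibration model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy measurement model: calibration correction c = d_ref - d_meas,
# with hypothetical input distributions for the reference (standard
# network) distance and the GNSS-derived baseline.
M = 200_000
d_ref = rng.normal(100.0000, 0.0005, M)      # standard network value (m)
d_meas = rng.normal(100.0032, 0.0020, M)     # GNSS-derived baseline (m)
c = d_ref - d_meas                           # measurement model

u_c = c.std(ddof=1)                          # standard uncertainty
lo, hi = np.percentile(c, [2.5, 97.5])       # 95% coverage interval
print(f"c = {c.mean()*1e3:.2f} mm, u = {u_c*1e3:.2f} mm, "
      f"95% interval = [{lo*1e3:.2f}, {hi*1e3:.2f}] mm")
```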
Abstract:
This PhD thesis describes the investigations carried out from 2008 to 2012 for the analysis and design of a broadband thermal noise primary standard in coaxial technology. To place this work in its scientific context, it should be borne in mind that reliable and traceable measurement underpins the welfare of a modern society and plays a critical role in supporting economic competitiveness, manufacturing, and trade, as well as quality of life. In the modern world, a well-developed measurement infrastructure gives confidence in many aspects of our daily life: it enables the development and manufacturing of reliable, innovative, high-quality products; supports the competitiveness of industry and its sustainable production; removes technical barriers to trade and supports fair trade; ensures the safety and effectiveness of healthcare; and responds to the major challenges of modern society in areas as demanding as energy and the environment. With all this in mind, a thermal noise primary standard has been developed with the aim of providing the Spanish metrology system with a new primary reference standard, enabling reliable and traceable measurements in the calibration and measurement of RF and microwave electromagnetic noise devices. The standard has been designed to work in the frequency range from 10 MHz to 26.5 GHz, meeting the following specifications: (1) a nominal noise temperature output of approximately 83 K; (2) a noise temperature uncertainty below ± 1 K over the whole frequency range; and (3) a reflection coefficient as low as possible from 0.01 to 26.5 GHz. The thesis is divided into three clearly differentiated parts. The first, comprising Chapters 1 to 5, presents the whole process of simulating and adjusting the main parameters of the device in order to identify those that are critical for its construction. The second part, Chapter 6, develops the computations needed to obtain the noise temperature at the output of the device. The third and final part, Chapter 7, is devoted to estimating the uncertainty of the noise temperature obtained in the preceding chapter. More specifically, Chapter 1 provides a thorough introduction to the scientific and technological environment in which this research takes place, details the objectives pursued, and presents the methodology used to achieve them. Chapter 2 describes the characterization and selection of the dielectric material for the bead inside the transmission line of the standard, which puts the two coaxial conductors in thermal contact to equalize their temperatures while maintaining the characteristic impedance of the whole standard; the dielectric properties of liquid nitrogen are also studied to evaluate their influence on the final impedance of the transmission line. Chapter 3 analyzes the behavior of two commercial loads and a commercial airline under cryogenic working conditions. This study aims to obtain the variation of the reflection coefficient when passing from room to cryogenic temperature, to check whether these devices are damaged by working at cryogenic temperatures, and to determine whether their behavior changes after successive cooling-heating cycles, obtaining a bound on this variation so that the load with the lowest reflection coefficient and the least variability can be selected. Chapter 4 starts from the analysis of the dielectric bead structure used in NBS Technical Note 1074 of NIST, obtaining its scattering parameters, which are then used to compute its effect on the reflection coefficient of the complete coaxial structure. A further study is carried out to improve on the design of that technical note: the dielectric bead region is analyzed and its geometry modified to reduce the reflection it produces. Specifically, the radius of the inner conductor in the bead region is adjusted so that it presents the same characteristic impedance as the line, and the relationship between the inner conductor radius and the radius of the thermal-bead transition is obtained analytically so as to guarantee the same characteristic impedance at every point of the transition, while maintaining realistic criteria of robustness and manufacturability. Chapter 5 analyzes the thermal behavior of the noise standard and its influence on the conductivity of the metallic materials, considering both the possibility that the liquid nitrogen remains outside the line and that it penetrates inside. In both cases, given the rotational symmetry of the problem, a section of the coaxial line has been simulated thermally, i.e., the equivalent two-dimensional problem has been solved, although the results are applicable to the real three-dimensional structure; the Matlab PDE Toolbox was used for the thermal simulation. Chapter 6 computes the noise temperature at the output of the device, starting from the contribution of each section of the standard to the final noise temperature, and studies the influence of variations in the parameters of the standard's elements on its fundamental characteristics, namely the reflection coefficient along the entire device. Once the electromagnetic noise standard has been described and analyzed, Chapter 7 details the steps followed to estimate the uncertainty of the output electromagnetic noise temperature, using two methods: the classical analytical approach of the Guide to the Expression of Uncertainty in Measurement [GUM95] and the Monte Carlo simulation method. Chapter 8 presents the conclusions and achievements. During the development of this thesis, a novel, patentable device was obtained and registered with the Spanish Patent and Trademark Office (O.E.P.M.) in Madrid, in accordance with Article 20 of Law 11/1986 of March 20 on Patents, under the title Broadband Thermal Noise Primary Standard (Reference P-101061), dated February 7, 2011.
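The two uncertainty routes of Chapter 7 can be contrasted on a toy noise-temperature model with hypothetical values (not the standard's actual budget): first-order GUM propagation with sensitivity coefficients versus Monte Carlo propagation of distributions:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy model of the output noise temperature of a cryogenic load seen
# through a lossy line: T_out = a*T_load + (1 - a)*T_phys, with
# hypothetical best estimates and standard uncertainties.
a0, u_a = 0.95, 0.005           # available power ratio of the line
Tl0, u_Tl = 77.36, 0.05         # load (liquid nitrogen) temperature, K
Tp0, u_Tp = 290.0, 2.0          # physical temperature of the line, K

# GUM: first-order propagation with sensitivity coefficients.
c_a, c_Tl, c_Tp = Tl0 - Tp0, a0, 1 - a0
u_gum = np.sqrt((c_a * u_a) ** 2 + (c_Tl * u_Tl) ** 2 + (c_Tp * u_Tp) ** 2)

# Monte Carlo: propagation of distributions through the same model.
M = 500_000
a = rng.normal(a0, u_a, M)
T = a * rng.normal(Tl0, u_Tl, M) + (1 - a) * rng.normal(Tp0, u_Tp, M)
print(f"GUM u = {u_gum:.3f} K, Monte Carlo u = {T.std(ddof=1):.3f} K")
```

For a model this close to linear the two methods agree closely; the Monte Carlo route becomes the safer choice when the model is strongly nonlinear or the input distributions are far from Gaussian.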
Abstract:
An uncertainty propagation methodology based on the Monte Carlo method is applied to PWR nuclear design analysis to assess the impact of nuclear data uncertainties in ²³⁵U, ²³⁸U, ²³⁹Pu, and the thermal scattering library for hydrogen in water. This uncertainty analysis is compared with the design and acceptance criteria to assure the adequacy of bounding estimates in safety margins.
Abstract:
An uncertainty propagation methodology based on the Monte Carlo method is applied to PWR nuclear design analysis to assess the impact of nuclear data uncertainties. The importance of the nuclear data uncertainties for ²³⁵U, ²³⁸U, and ²³⁹Pu, and for the thermal scattering library for hydrogen in water, is analyzed. This uncertainty analysis is compared with the design and acceptance criteria to assure the adequacy of bounding estimates in safety margins.
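The sampling idea behind such a methodology can be sketched as follows, with a hypothetical covariance matrix and a stand-in response function; the real analysis perturbs evaluated nuclear data libraries and reruns the full lattice/core design calculation for each sample:

```python
import numpy as np

rng = np.random.default_rng(9)

# Sample correlated perturbations of a few cross sections from a
# (hypothetical) relative covariance matrix, re-evaluate a simplified
# response for each sample, and take the spread of the results as the
# propagated nuclear-data uncertainty.
xs_nominal = np.array([1.00, 0.35, 2.10])     # relative cross sections
cov = np.array([[4e-4, 1e-4, 0.0],
                [1e-4, 9e-4, 0.0],
                [0.0,  0.0,  2.5e-3]])        # relative covariance

samples = rng.multivariate_normal(xs_nominal, cov, size=50_000)

# Stand-in for the neutronics code: a smooth function of the data.
k = samples[:, 0] * samples[:, 2] / (1.0 + samples[:, 1])
print(f"response = {k.mean():.4f} +/- {k.std(ddof=1):.4f} (1 sigma)")
```

The sampled uncertainty band is then what gets compared against the design and acceptance criteria to confirm that the bounding safety-margin estimates remain adequate.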