949 results for subgrid-scale model
Abstract:
High-permittivity ("high-k") dielectric materials are used in the transistor gate stack in integrated circuits. As the thickness of the silicon oxide dielectric is reduced below 2 nm with continued downscaling, the leakage current due to tunnelling increases, leading to high power consumption and reduced device reliability. Research therefore concentrates on finding materials with a high dielectric constant that can be easily integrated into a manufacturing process and show the desired properties as a thin film. Atomic layer deposition (ALD) is used in practice to deposit high-k materials such as HfO2, ZrO2, and Al2O3 as gate oxides. ALD is a technique for producing conformal layers of material with nanometer-scale thickness, used commercially in non-planar electronics and increasingly in other areas of science and technology. ALD is a type of chemical vapor deposition that depends on self-limiting surface chemistry. In ALD, gaseous precursors are admitted individually into the reactor chamber in alternating pulses. Between each pulse, inert gas is admitted to prevent gas-phase reactions. This thesis provides a deeper understanding of the ALD of oxides such as HfO2, showing how the chemistry affects the properties of the deposited film. Using multi-scale modelling of ALD, the kinetics of reactions at the growing surface are connected to experimental data. In this thesis, we use the density functional theory (DFT) method to simulate more realistic models for the growth of HfO2 from Hf(N(CH3)2)4/H2O and HfCl4/H2O and of Al2O3 from Al(CH3)3/H2O. Three major breakthroughs are reported. First, a new reaction pathway, 'multiple proton diffusion', is proposed for the growth of HfO2 from Hf(N(CH3)2)4/H2O [1]. Second, a 'cooperative' action between adsorbed precursors is shown to play an important role in ALD: previously inert fragments can become reactive once sufficient molecules adsorb in their neighbourhood during either precursor pulse. Third, the ALD of HfO2 from Hf(N(CH3)2)4 and H2O is implemented for the first time in a 3D on-lattice kinetic Monte Carlo (KMC) model [2]. In this integrated approach (DFT+KMC), retaining the accuracy of the atomistic model at the higher scale leads to remarkable advances in our understanding. The resulting atomistic model allows direct comparison with experimental techniques such as X-ray photoelectron spectroscopy and quartz crystal microbalance.
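To make the DFT-to-KMC coupling concrete, the sketch below shows the core of a rejection-free on-lattice KMC step, with rate constants obtained from DFT-computed barriers via an Arrhenius expression. The barriers, prefactors, and temperature are illustrative placeholders, not the published Hf(N(CH3)2)4/H2O values.

```python
import math
import random

KB_EV = 8.617333e-5  # Boltzmann constant in eV/K

def arrhenius(prefactor_hz, barrier_ev, temp_k):
    """Rate constant from a DFT-computed barrier (transition state theory)."""
    return prefactor_hz * math.exp(-barrier_ev / (KB_EV * temp_k))

def kmc_step(event_rates, time_now):
    """One rejection-free (BKL/Gillespie) KMC step: pick an event with
    probability proportional to its rate, then advance the clock."""
    total = sum(event_rates)
    r = random.random() * total
    acc = 0.0
    for i, rate in enumerate(event_rates):
        acc += rate
        if r <= acc:
            chosen = i
            break
    dt = -math.log(random.random()) / total
    return chosen, time_now + dt

# Illustrative only: barriers (eV) and prefactors are placeholders,
# not the values computed in the thesis.
rates = [arrhenius(1e13, b, 500.0) for b in (0.6, 0.8, 1.1)]
event, t = kmc_step(rates, 0.0)
print(f"fired event {event} at t = {t:.3e} s")
```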
Abstract:
A new multi-scale model of brittle fracture growth in an Ag plate with macroscopic dimensions is proposed, in which crack propagation is identified with the stochastic drift-diffusion motion of the crack-tip atom through the material. The model couples molecular dynamics simulations, based on many-body interatomic potentials, with the continuum-based theories of fracture mechanics. The Ito stochastic differential equation is used to advance the tip position on a macroscopic scale before each nano-scale simulation is performed. Well-known crack characteristics, such as the roughening transitions of the crack surfaces, as well as the macroscopic crack trajectories, are obtained.
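The drift-diffusion advance of the crack tip can be realised with the standard Euler-Maruyama discretisation of an Ito SDE, sketched below. The constant drift and diffusion coefficients here are placeholders; in the model they would come from continuum fracture mechanics and MD-calibrated fluctuations.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, dt, n_steps, rng=None):
    """Integrate dX = drift(X) dt + diffusion(X) dW with the
    Euler-Maruyama scheme, the standard discretisation of an Ito SDE."""
    rng = rng or np.random.default_rng()
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))  # Wiener increment
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dw
    return x

# Placeholder coefficients, purely for illustration.
path = euler_maruyama(drift=lambda y: 1.0, diffusion=lambda y: 0.2,
                      x0=0.0, dt=1e-3, n_steps=1000)
print(path[-1])
```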
Abstract:
The Weather Research and Forecasting model, integrated online with a chemistry module, is a multi-scale model suitable for both research and operational forecasts of meteorology and air quality. It is used by many institutions for a variety of applications. In this study, WRF v3.5 with chemistry (WRF-Chem) is applied to the area of Poland for the period 3-20 July 2006, when high concentrations of ground-level ozone were observed. The meteorological and chemistry simulations were initialized with the ERA-Interim reanalysis and the TNO MACC II emissions database, respectively. The model physical parameterizations include RRTM shortwave radiation, the Kain-Fritsch cumulus scheme, Purdue Lin microphysics, and the ACM2 PBL scheme, established previously as the optimal configuration. The chemical mechanism used for the study was RADM2 with MADE/SORGAM aerosols. Simulations were performed for three one-way nested domains covering Europe (36 km x 36 km), Central Europe (12 km x 12 km), and Poland (4 km x 4 km). The results from the innermost domain were analyzed and compared to measurements of ozone concentration at three stations in different environments. The results show an underestimation of the observed values and of the daily amplitude of ozone concentrations.
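Model-to-station comparisons of this kind reduce to simple summary statistics; the sketch below computes a mean bias error, whose negative sign corresponds to the reported underestimation of ozone. The station values are invented for illustration, not the study's data.

```python
import numpy as np

def mean_bias(model, obs):
    """Mean bias error of model vs. observations; negative values
    indicate underestimation, as reported for the 4 km domain."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    mask = ~np.isnan(model) & ~np.isnan(obs)
    return float(np.mean(model[mask] - obs[mask]))

# Hypothetical hourly ozone (ug/m3) at one station, not the study's data.
obs   = [62.0, 75.0, 91.0, 88.0, 70.0]
model = [55.0, 68.0, 80.0, 79.0, 66.0]
print(mean_bias(model, obs))  # < 0: model underestimates
```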
Abstract:
In recent decades, macro-scale hydrological models have become established as important tools for comprehensively assessing the state of global renewable freshwater resources. Today they are used to answer a wide range of scientific questions, in particular regarding the impacts of anthropogenic interventions on the natural flow regime and the impacts of global change and climate change on water resources. These impacts can be estimated through a variety of water-related indicators, such as renewable (ground)water resources, flood hazard, droughts, water stress, and water scarcity. The further development of macro-scale hydrological models has been favoured in particular by steadily increasing computing capacity, but also by the growing availability of remote-sensing data and derived data products that can be used to drive and improve the models. Like all macro- to global-scale modelling approaches, macro-scale hydrological simulations are subject to considerable uncertainties, which stem from (i) spatial input data sets, such as meteorological variables or land-surface parameters, and (ii) in particular the (often) simplified representation of physical processes in the model. Given these uncertainties, it is essential to verify the actual applicability and predictive capability of the models under diverse climatic and physiographic conditions. So far, however, most evaluation studies have been carried out in only a few large river basins or have focused on continental water fluxes. This contrasts with many application studies whose analyses and conclusions are based on simulated state variables and fluxes at a much finer spatial resolution (grid cell). The core of this dissertation is an extensive evaluation of the general applicability of the global hydrological model WaterGAP3 for simulating monthly flow regimes and low and high flows, based on more than 2400 discharge time series for the period 1958-2010. The river basins considered represent a broad spectrum of climatic and physiographic conditions, with basin sizes ranging from 3000 to several million square kilometres. The model evaluation has two objectives: first, the model performance achieved is to serve as a benchmark against which any further model improvements can be compared. Second, a method for diagnostic model evaluation is to be developed and tested that identifies clear starting points for model improvement where model performance is inadequate. To this end, complementary performance metrics are linked with nine basin attributes that quantify the climatic and physiographic conditions as well as the degree of anthropogenic disturbance in the individual basins. WaterGAP3 achieves medium to high performance in simulating both monthly flow regimes and low and high flows, but clear spatial patterns are evident for all performance metrics considered. Of the nine basin attributes considered, the degree of aridity and the mean basin slope in particular exert a strong influence on model performance. The model tends to overestimate the annual flow volume with increasing aridity. This behaviour is characteristic of macro-scale hydrological models and is attributable to the inadequate representation of runoff generation and concentration processes in water-limited regions. In steep basins, low model performance is found with respect to the representation of monthly flow variability and temporal dynamics, which is also reflected in the quality of the low- and high-flow simulations. This observation points to necessary model improvements regarding (i) the partitioning of total runoff into fast and delayed runoff components and (ii) the calculation of the flow velocity in the river channel. The method for diagnostic model evaluation developed in this dissertation, which links complementary performance metrics with basin attributes, was tested using the WaterGAP3 model as an example. The method has proven to be an efficient tool for explaining spatial patterns in model performance and identifying deficits in the model structure. The developed method is generally applicable to any hydrological model. However, it is particularly relevant for macro-scale models and multi-basin studies, as it can partly compensate for the lack of field-specific knowledge and targeted measurement campaigns that catchment modelling usually relies on.
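The diagnostic linkage of performance metrics to basin attributes can be illustrated with a minimal sketch: compute a skill score per basin (here the Nash-Sutcliffe efficiency) and rank-correlate it with an attribute such as aridity. The per-basin numbers below are hypothetical; the dissertation works with more than 2400 gauges and nine attributes.

```python
import numpy as np
from scipy.stats import spearmanr

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, a standard skill score for flow regimes."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical per-basin aridity index and pre-computed skill scores.
aridity = np.array([0.3, 0.8, 1.5, 2.4, 3.1])
skill   = np.array([0.85, 0.74, 0.55, 0.31, 0.12])
rho, p = spearmanr(aridity, skill)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")  # skill declines with aridity
```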
Abstract:
The unsaturated zone exerts a major control on the delivery of nutrients to Chalk streams, yet flow and transport processes in this complex, dual-porosity medium have remained controversial. A major challenge arises in characterising these processes, both at the detailed mechanistic level and at an appropriate level for inclusion within catchment-scale models for nutrient management. The lowland catchment research (LOCAR) programme in the UK has provided a unique set of comprehensively instrumented groundwater-dominated catchments. Of these, the Pang and Lambourn, tributaries of the Thames near Reading, have been a particular focus for research into subsurface processes and surface water-groundwater interactions. Data from LOCAR and other sources, along with a new dual permeability numerical model of the Chalk, have been used to explore the relative roles of matrix and fracture flow within the unsaturated zone and resolve conflicting hypotheses of response. From the improved understanding gained through these explorations, a parsimonious conceptualisation of the general response of flow and transport within the Chalk unsaturated zone was formulated. This paper summarises the modelling and data findings of these explorations, and describes the integration of the new simplified unsaturated zone representation with a catchment-scale model of nutrients (INCA), resulting in a new model for catchment-scale flow and transport within Chalk systems: INCA-Chalk. This model is applied to the Lambourn, and results, including hindcast and forecast simulations, are presented. These clearly illustrate the decadal time-scales that need to be considered in the context of nutrient management and the EU Water Framework Directive.
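A parsimonious dual-porosity conceptualisation of the kind described can be sketched as two linked stores: slow matrix drainage that always operates, plus a fast fracture store that receives water only once matrix storage exceeds a threshold. All parameter values below are illustrative assumptions, not the INCA-Chalk parameterisation.

```python
def dual_store_step(s_matrix, s_fracture, recharge, dt=1.0,
                    k_matrix=0.002, k_fracture=0.2, threshold=50.0):
    """One step of a toy matrix/fracture conceptualisation: slow matrix
    drainage always operates; water exceeding a matrix threshold is
    routed to a fast-draining fracture store. Units are arbitrary."""
    s_matrix += recharge * dt
    if s_matrix > threshold:                 # fracture flow activates
        s_fracture += s_matrix - threshold
        s_matrix = threshold
    q_matrix = k_matrix * s_matrix * dt      # slow matrix drainage
    q_fracture = k_fracture * s_fracture * dt
    return s_matrix - q_matrix, s_fracture - q_fracture, q_matrix + q_fracture

sm, sf, q = 40.0, 0.0, 0.0
for rain in [0.0, 5.0, 20.0, 0.0, 0.0]:      # hypothetical recharge series
    sm, sf, q = dual_store_step(sm, sf, rain)
print(round(q, 3))
```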
Abstract:
Measurements of anthropogenic tracers such as chlorofluorocarbons and tritium must be quantitatively combined with ocean general circulation models as a component of systematic model development. The authors have developed and tested an inverse method, using a Green's function, to constrain general circulation models with transient tracer data. Using this method, chlorofluorocarbon-11 and -12 (CFC-11 and CFC-12) observations are combined with a North Atlantic configuration of the Miami Isopycnic Coordinate Ocean Model with 4/3 degrees resolution. Systematic differences can be seen between the observed CFC concentrations and prior CFC fields simulated by the model. These differences are reduced by the inversion, which determines the optimal gas transfer across the air-sea interface, accounting for uncertainties in the tracer observations. After including the effects of unresolved variability in the CFC fields, the model is found to be inconsistent with the observations because the model/data misfit slightly exceeds the error estimates. By excluding observations in waters ventilated north of the Greenland-Scotland ridge (sigma_0 < 27.82 kg m^-3; shallower than about 2000 m), the fit is improved, indicating that the Nordic overflows are poorly represented in the model. Some systematic differences in the model/data residuals remain and are related, in part, to excessively deep model ventilation near Rockall and deficient ventilation in the main thermocline of the eastern subtropical gyre. Nevertheless, there do not appear to be gross errors in the basin-scale model circulation. Analysis of the CFC inventory using the constrained model suggests that the North Atlantic Ocean shallower than about 2000 m was near 20% saturated in the mid-1990s. Overall, this basin is a sink for 22% of the total atmosphere-to-ocean CFC-11 flux, twice the global average value. The average water mass formation rates over the CFC transient are 7.0 and 6.0 Sv (1 Sv = 10^6 m^3 s^-1) for subtropical mode water and subpolar mode water, respectively.
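In its simplest form, a Green's-function constraint reduces to a weighted linear least-squares problem: find the combination of flux-perturbation patterns that best explains the observed-minus-prior tracer misfit. The sketch below solves a toy version of that system; the patterns and "stations" are random stand-ins, not CFC fields.

```python
import numpy as np

def green_inversion(G, d_obs, d_prior, cov):
    """Solve for flux adjustments m minimising the weighted misfit
    between G m and (d_obs - d_prior), given diagonal observation
    error covariance cov."""
    residual = d_obs - d_prior
    w = 1.0 / np.sqrt(np.diag(cov))          # observation error weights
    m, *_ = np.linalg.lstsq(G * w[:, None], residual * w, rcond=None)
    return m

# Toy system: 3 flux patterns sampled at 5 "stations" (not real CFC data).
rng = np.random.default_rng(0)
G = rng.random((5, 3))
m_true = np.array([1.2, -0.3, 0.7])
d_obs = G @ m_true
print(green_inversion(G, d_obs, np.zeros(5), np.eye(5)))  # recovers m_true
```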
Abstract:
Process-based integrated modelling of weather and crop yield over large areas is becoming an important research topic. The production of the DEMETER ensemble hindcasts of weather allows this work to be carried out in a probabilistic framework. In this study, ensembles of crop yield (groundnut, Arachis hypogaea L.) were produced for ten 2.5 degrees x 2.5 degrees grid cells in western India using the DEMETER ensembles and the general large-area model (GLAM) for annual crops. Four key issues are addressed by this study. First, crop model calibration methods for use with weather ensemble data are assessed. Calibration using yield ensembles was more successful than calibration using reanalysis data (the European Centre for Medium-Range Weather Forecasts 40-yr reanalysis, ERA40). Secondly, the potential for probabilistic forecasting of crop failure is examined. The hindcasts show skill in the prediction of crop failure, with more severe failures being more predictable. Thirdly, the use of yield ensemble means to predict interannual variability in crop yield is examined and their skill assessed relative to baseline simulations using ERA40. The accuracy of multi-model yield ensemble means is equal to or greater than the accuracy using ERA40. Fourthly, the impact of two key uncertainties, sowing window and spatial scale, is briefly examined. The impact of uncertainty in the sowing window is greater with ERA40 than with the multi-model yield ensemble mean. Subgrid heterogeneity affects model accuracy: where correlations are low on the grid scale, they may be significantly positive on the subgrid scale. The implications of the results of this study for yield forecasting on seasonal time-scales are as follows. (i) There is the potential for probabilistic forecasting of crop failure (defined by a threshold yield value); forecasting of yield terciles shows less potential. (ii) Any improvement in the skill of climate models has the potential to translate into improved deterministic yield prediction. (iii) Whilst model input uncertainties are important, uncertainty in the sowing window may not require specific modelling. The implications of the results of this study for yield forecasting on multidecadal (climate change) time-scales are as follows. (i) The skill in the ensemble mean suggests that the perturbation, within uncertainty bounds, of crop and climate parameters could potentially average out some of the errors associated with mean yield prediction. (ii) For a given technology trend, decadal fluctuations in the yield-gap parameter used by GLAM may be relatively small, implying some predictability on those time-scales.
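Skill of a yield ensemble mean relative to a baseline is typically summarised by the correlation of the ensemble mean with the observed interannual series, as in the minimal sketch below (toy numbers, not DEMETER/GLAM output).

```python
import numpy as np

def ensemble_mean_skill(yield_ensemble, yield_obs):
    """Correlation of the multi-model yield ensemble mean with observed
    interannual yields; the same statistic would be computed for the
    ERA40-driven baseline for comparison."""
    ens_mean = np.asarray(yield_ensemble, float).mean(axis=0)
    obs = np.asarray(yield_obs, float)
    return float(np.corrcoef(ens_mean, obs)[0, 1])

# rows = ensemble members, columns = years (hypothetical values)
ens = [[0.9, 1.1, 0.6, 1.3], [1.0, 1.2, 0.5, 1.2], [0.8, 1.0, 0.7, 1.4]]
obs = [0.95, 1.15, 0.55, 1.30]
print(ensemble_mean_skill(ens, obs))
```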
Abstract:
We investigate the spatial characteristics of urban-like canopy flow by applying particle image velocimetry (PIV) to atmospheric turbulence. The study site was the Comprehensive Outdoor Scale MOdel (COSMO) experiment for urban climate in Japan. The PIV system captured the two-dimensional flow field within the canopy layer continuously for an hour with a sampling frequency of 30 Hz, thereby providing reliable outdoor turbulence statistics. PIV measurements in a wind-tunnel facility using similar roughness geometry, but with a lower sampling frequency of 4 Hz, were also carried out for comparison. The turbulent momentum fluxes from COSMO and the wind tunnel showed similar values and distributions when scaled using friction velocity. Some differences between the outdoor and indoor flow fields were mainly caused by the larger fluctuations in wind direction in the atmospheric turbulence. The focus of the analysis is on a variety of instantaneous turbulent flow structures. One remarkable flow structure is termed 'flushing': a large-scale upward motion prevailing across the whole vertical cross-section of a building gap. It is observed intermittently, whereby tracer particles are flushed vertically out of the canopy layer. Flushing phenomena are also observed in the wind tunnel, where there is neither thermal stratification nor outer-layer turbulence. It is suggested that flushing phenomena are correlated with the passage of large-scale low-momentum regions above the canopy.
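The friction-velocity scaling used to compare outdoor and wind-tunnel profiles can be sketched directly: compute the kinematic momentum flux <u'w'> from velocity fluctuations and normalise by the friction velocity squared. The synthetic 30 Hz record below merely imitates anti-correlated u' and w'; it is not COSMO data.

```python
import numpy as np

def scaled_momentum_flux(u, w, u_star):
    """Kinematic turbulent momentum flux <u'w'> from a velocity time
    series, normalised by friction velocity squared."""
    u, w = np.asarray(u, float), np.asarray(w, float)
    up, wp = u - u.mean(), w - w.mean()       # fluctuations about the mean
    return float(np.mean(up * wp) / u_star**2)

# Synthetic one-hour record at 30 Hz (108000 samples), not COSMO data.
rng = np.random.default_rng(1)
w_fluc = rng.normal(0, 0.3, 108000)
u_series = 2.0 - 0.5 * w_fluc + rng.normal(0, 0.3, 108000)
print(scaled_momentum_flux(u_series, w_fluc, u_star=0.25))  # negative flux
```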
Abstract:
The activation of aerosols to form cloud droplets is dependent upon vertical velocities whose local variability is not typically resolved at the GCM grid scale. Consequently, it is necessary to represent the subgrid-scale variability of vertical velocity in the calculation of cloud droplet number concentration. This study uses the UK Chemistry and Aerosols community model (UKCA) within the Hadley Centre Global Environmental Model (HadGEM3), coupled for the first time to an explicit aerosol activation parameterisation, and hence known as UKCA-Activate. We explore the range of uncertainty in estimates of the indirect aerosol effects attributable to the choice of parameterisation of the subgrid-scale variability of vertical velocity in HadGEM-UKCA. Results of simulations demonstrate that the use of a characteristic vertical velocity cannot replicate results derived with a distribution of vertical velocities, and is to be discouraged in GCMs. This study focuses on the effect of the variance (σw^2) of a Gaussian pdf (probability density function) of vertical velocity. Fixed values of σw (spanning the range measured in situ by nine flight campaigns found in the literature) and a configuration in which σw depends on turbulent kinetic energy are tested. Results from the mid-range fixed σw and TKE-based configurations both compare well with observed vertical velocity distributions and cloud droplet number concentrations. The radiative flux perturbation due to the total effects of anthropogenic aerosol is estimated at −1.9 W m^-2 with σw = 0.1 m s^-1, −2.1 W m^-2 with σw derived from TKE, −2.25 W m^-2 with σw = 0.4 m s^-1, and −2.3 W m^-2 with σw = 0.7 m s^-1. The breadth of this range is 0.4 W m^-2, which is comparable to a substantial fraction of the total diversity of current aerosol forcing estimates. Reducing the uncertainty in the parameterisation of σw would therefore be an important step towards reducing the uncertainty in estimates of the indirect aerosol effects. Detailed examination of regional radiative flux perturbations reveals that aerosol microphysics can be responsible for some climate-relevant radiative effects, highlighting the importance of including microphysical aerosol processes in GCMs.
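The pdf treatment contrasted with a single characteristic velocity amounts to averaging an activation closure over the updraught side of a Gaussian in w. The sketch below does this with a placeholder power-law closure; the three fixed σw values are those tested in the study, but the closure itself is an assumption, not UKCA-Activate's parameterisation.

```python
import numpy as np

def cdnc_gaussian_pdf(activate, sigma_w, n_quad=64):
    """Average an activation closure N(w) over the updraught side of a
    Gaussian pdf of vertical velocity (mean 0, std sigma_w)."""
    w = np.linspace(1e-3, 5.0 * sigma_w, n_quad)  # updraughts only
    pdf = np.exp(-0.5 * (w / sigma_w) ** 2)
    weights = pdf / pdf.sum()                     # normalised quadrature
    return float(np.sum(activate(w) * weights))

# Placeholder closure (cm^-3), not the explicit activation scheme.
toy_activation = lambda w: 250.0 * w ** 0.4
for sw in (0.1, 0.4, 0.7):      # fixed sigma_w values tested in the study
    print(sw, round(cdnc_gaussian_pdf(toy_activation, sw), 1))
```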
Abstract:
Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as in reduced-order climate models. Stochastic methods are used as subgrid-scale parameterizations (SSPs) as well as for model error representation, uncertainty quantification, data assimilation, and ensemble prediction. The need for stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced-order models can simulate and predict large-scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced-order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modeling. In this review, we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.
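A common concrete instance of such a stochastic parameterization is a first-order autoregressive (discretised Ornstein-Uhlenbeck) perturbation applied to parameterised tendencies, as sketched below. The memory and amplitude values are illustrative, and the SPPT-style multiplicative use shown in the comment is one option among several surveyed in such reviews.

```python
import numpy as np

def ou_tendency_perturbation(n_steps, dt, tau=6.0, sigma=0.1, seed=0):
    """First-order autoregressive (discrete Ornstein-Uhlenbeck) noise:
    tau sets the decorrelation time, sigma the stationary amplitude."""
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)
    e = np.empty(n_steps)
    e[0] = 0.0
    for k in range(1, n_steps):
        e[k] = phi * e[k - 1] + sigma * np.sqrt(1 - phi**2) * rng.normal()
    return e

perturb = ou_tendency_perturbation(240, dt=0.25)
# total_tendency = deterministic_tendency * (1 + perturb[k])  # SPPT-style use
print(perturb[:3])
```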
Abstract:
The Monte Carlo Independent Column Approximation (McICA) is a flexible method for representing subgrid-scale cloud inhomogeneity in radiative transfer schemes. It does, however, introduce conditional random errors, but these have been shown to have little effect on climate simulations, where the spatial and temporal scales of interest are large enough for the effects of noise to be averaged out. This article considers the effect of McICA noise on a numerical weather prediction (NWP) model, where the time and spatial scales of interest are much closer to those at which the errors manifest themselves; this, as we show, means that the noise is more significant. We suggest methods for efficiently reducing the magnitude of McICA noise and test these methods in a global NWP version of the UK Met Office Unified Model (MetUM). The resultant errors are put into context by comparison with errors due to the widely used assumption of maximum-random overlap of plane-parallel homogeneous cloud. For a simple implementation of the McICA scheme, forecasts of near-surface temperature are found to be worse than those obtained using the plane-parallel, maximum-random-overlap representation of clouds. However, by applying the methods suggested in this article, we can reduce the noise enough to give forecasts of near-surface temperature that are an improvement on the plane-parallel, maximum-random-overlap forecasts. We conclude that the McICA scheme can be used to improve the representation of clouds in NWP models, with the proviso that the associated noise is sufficiently small.
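The source of McICA's conditional random noise is easy to see in a sketch: each spectral g-point sees a single stochastically generated cloudy/clear subcolumn, so the sampled cloud field only converges to the gridbox cloud fraction as the number of g-points grows. The generator below uses simple random overlap for brevity; operational schemes use maximum-random overlap.

```python
import numpy as np

def mcica_subcolumns(cloud_frac, n_gpoints, seed=0):
    """Generate one binary cloudy/clear subcolumn per spectral g-point,
    with simple random overlap. Each row is the subcolumn sampled by
    one g-point; True marks a cloudy layer."""
    rng = np.random.default_rng(seed)
    cf = np.asarray(cloud_frac, float)             # per-layer cloud fraction
    return rng.random((n_gpoints, cf.size)) < cf

cols = mcica_subcolumns([0.2, 0.5, 0.8], n_gpoints=112)
print(cols.mean(axis=0))   # approaches cloud_frac as n_gpoints increases
```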
Abstract:
We present ocean model sensitivity experiments aimed at separating the influence of the projected changes in the “thermal” (near-surface air temperature) and “wind” (near-surface winds) forcing on the patterns of sea level and ocean heat content. In the North Atlantic, the distribution of sea level change is more due to the “thermal” forcing, whereas it is more due to the “wind” forcing in the North Pacific; in the Southern Ocean, the “thermal” and “wind” forcing have a comparable influence. In the ocean adjacent to Antarctica, the “thermal” forcing leads to an inflow of warmer waters on the continental shelves, which is somewhat attenuated by the “wind” forcing. The structure of the vertically integrated heat uptake is set by different processes at low and high latitudes: at low latitudes it is dominated by the heat transport convergence, whereas at high latitudes it represents a small residual of changes in the surface flux and advection of heat. The structure of the horizontally integrated heat content tendency is set by the increase of downward heat flux by the mean circulation and a comparable decrease of upward heat flux by the subgrid-scale processes; the upward eddy heat flux decreases and increases by almost the same magnitude in response to, respectively, the “thermal” and “wind” forcing. Regionally, the surface heat loss and deep convection weaken in the Labrador Sea, but intensify in the Greenland Sea in the region of sea ice retreat. The enhanced heat flux anomaly in the subpolar Atlantic is mainly caused by the “thermal” forcing.
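Single-forcing experiments of this type are often checked for additivity: the residual between the full response and the sum of the “thermal” and “wind” responses measures how nonlinearly the two effects combine. A minimal sketch with invented numbers:

```python
import numpy as np

def forcing_decomposition(d_full, d_thermal, d_wind):
    """Additivity check for perturbation experiments: the residual is the
    part of the full response not explained by the sum of the
    single-forcing responses."""
    return np.asarray(d_full, float) - (np.asarray(d_thermal, float)
                                        + np.asarray(d_wind, float))

# Toy sea-level-change values (cm) at two locations, not model output.
print(forcing_decomposition([4.0, 2.0], [2.5, 0.5], [1.2, 1.6]))
```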
A bivariate regression model for matched paired survival data: local influence and residual analysis
Abstract:
The use of bivariate distributions plays a fundamental role in survival and reliability studies. In this paper, we consider a location-scale model for bivariate survival times, based on a copula proposed to model the dependence of bivariate survival data. For the proposed model, we consider inferential procedures based on maximum likelihood. Gains in efficiency from bivariate models are also examined in the censored-data setting. Simulation studies across different parameter settings, sample sizes, and censoring percentages are performed to assess the performance of the bivariate regression model for matched paired survival data. Sensitivity analysis methods such as local and total influence are presented and derived under three perturbation schemes. The martingale marginal and the deviance marginal residual measures are used to check the adequacy of the model. Furthermore, we propose a new measure, which we call the modified deviance component residual. The methodology in the paper is illustrated on a lifetime data set for kidney patients.
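As a generic illustration of how a copula couples the marginal survival functions of a matched pair, the sketch below evaluates a Clayton joint survival function; the paper's actual copula and location-scale marginals may differ.

```python
import numpy as np

def clayton_joint_survival(s1, s2, theta):
    """Joint survival of a matched pair under a Clayton copula applied to
    the marginal survival probabilities:
    C(s1, s2) = (s1^-theta + s2^-theta - 1)^(-1/theta), theta > 0.
    Larger theta means stronger positive dependence within a pair."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    return (s1 ** -theta + s2 ** -theta - 1.0) ** (-1.0 / theta)

# Marginal survival probabilities at some time t for each pair member.
print(clayton_joint_survival(0.7, 0.6, theta=1.5))
```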
Abstract:
The laboratory model is considered in this thesis. Information gained from this investigation has not been transferred to the larger industrial machines. Some of the factors noted concerning the efficiency of the laboratory shaking table are inherent in this small-scale model only.
Abstract:
The Princeton Ocean Model is used to study the circulation features in the Pearl River Estuary and their responses to tide, river discharge, wind, and heat flux in the dry winter and wet summer seasons. The model has an orthogonal curvilinear grid in the horizontal plane, with variable spacing from 0.5 km in the estuary to 1 km on the shelf, and 15 sigma levels in the vertical direction. The initial conditions and the subtidal open boundary forcing are obtained from an associated larger-scale model of the northern South China Sea. Buoyancy forcing uses the climatological monthly heat fluxes and river discharges, and both the climatological monthly wind and the realistic wind are used in the sensitivity experiments. The tidal forcing is represented by sinusoidal functions with the observed amplitudes and phases. In this paper, the simulated tide is first examined. The simulated seasonal distributions of salinity, as well as the temporal variations of salinity and velocity over a tidal cycle, are described and then compared with the in situ survey data from July 1999 and January 2000. The model successfully reproduces the main hydrodynamic processes, such as stratification, mixing, frontal dynamics, summer upwelling, and two-layer gravitational circulation, and the distributions of hydrodynamic parameters in the Pearl River Estuary and coastal waters for both the winter and summer seasons.
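The model's terrain-following vertical coordinate can be sketched in a few lines: sigma levels between 0 and -1 map to depths via z = eta + sigma (H + eta), so the same 15 levels adapt from the estuary shallows to the shelf. The bathymetry values below are illustrative, not the study's grid.

```python
import numpy as np

def sigma_to_depth(sigma_levels, bathymetry, eta=0.0):
    """Depths of terrain-following sigma levels, z = eta + sigma*(H + eta),
    with sigma running from 0 (surface) to -1 (bottom), as in the
    Princeton Ocean Model's vertical coordinate."""
    sigma = np.asarray(sigma_levels, float)[:, None]
    H = np.asarray(bathymetry, float)[None, :]
    return eta + sigma * (H + eta)

sigma = np.linspace(0.0, -1.0, 15)                  # 15 levels, as used here
depths = sigma_to_depth(sigma, [5.0, 20.0, 60.0])   # estuary -> shelf (m)
print(depths[-1])   # bottom level tracks the bathymetry
```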