941 results for SPICE simulations
Abstract:
The purpose of this evaluation project was to describe the integration of simulation into a nursing internship program and to help prepare new graduate nurses for patient care. Additionally, learning styles and perceptions of active learning, collaboration among peers, ways of learning, expectations of simulation, satisfaction, self-confidence, and simulation design were examined. [See PDF for complete abstract]
Abstract:
Accurate estimation of the firn lock-in depth is essential for correctly linking gas and ice chronologies in ice core studies. Here, two approaches to constrain the firn depth evolution in Antarctica are presented over the last deglaciation: outputs of a firn densification model, and measurements of δ15N of N2 in air trapped in ice cores, assuming that δ15N is only affected by gravitational fractionation in the firn column. Since the firn densification process is largely governed by surface temperature and accumulation rate, we have investigated four ice cores drilled in coastal (Berkner Island, BI, and James Ross Island, JRI) and semi-coastal (TALDICE and EPICA Dronning Maud Land, EDML) Antarctic regions. Combined with available ice core air-δ15N measurements from the EPICA Dome C (EDC) site, the studied regions encompass a large range of surface accumulation rates and temperature conditions. Our δ15N profiles reveal a heterogeneous response of the firn structure to glacial–interglacial climatic changes. While firn densification simulations correctly predict TALDICE δ15N variations, they systematically fail to capture the large millennial-scale δ15N variations measured at BI and the δ15N glacial levels measured at JRI and EDML – a mismatch previously reported for central East Antarctic ice cores. New constraints on the EDML gas–ice depth offset during the Laschamp event (~41 ka) and the last deglaciation do not favour the hypothesis of a large convective zone within the firn as the explanation of the glacial firn model–δ15N data mismatch for this site. While we could not conduct an in-depth study of the influence of impurities in snow on firnification from the existing datasets, our detailed comparison between the δ15N profiles and firn model simulations under different temperature and accumulation rate scenarios suggests that the role of accumulation rate may have been underestimated in the current description of firnification models.
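The gravitational-fractionation assumption in this abstract corresponds to the standard barometric relation, which lets a measured δ15N be inverted for a diffusive column height (a proxy for the lock-in depth). A minimal sketch, with all numerical values illustrative rather than taken from the paper:

```python
# Barometric (gravitational) fractionation of 15N14N relative to 14N14N in a
# static firn air column. Illustrative constants; not the paper's firn model.
import math

DELTA_M = 1.0e-3   # kg/mol, mass difference between 15N14N and 14N14N
G = 9.81           # m/s^2, gravitational acceleration
R = 8.314          # J/(mol K), gas constant

def d15n_grav(z_m, temp_k):
    """delta-15N (per mil) from gravitational settling over a diffusive column of depth z_m."""
    return (math.exp(DELTA_M * G * z_m / (R * temp_k)) - 1.0) * 1000.0

def diffusive_column_depth(d15n_permil, temp_k):
    """Invert the barometric relation to estimate the diffusive column depth (m)."""
    return R * temp_k / (DELTA_M * G) * math.log(1.0 + d15n_permil / 1000.0)
```

For a 70 m column at 230 K this gives a δ15N of roughly 0.36‰, in the range typically reported for Antarctic firn; a convective zone at the top of the firn (discussed in the abstract) would make the depth inferred this way an underestimate.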
Abstract:
Understanding natural climate variability and its driving factors is crucial to assessing future climate change. Therefore, comparing proxy-based climate reconstructions with forcing factors as well as comparing these with paleoclimate model simulations is key to gaining insights into the relative roles of internal versus forced variability. A review of the state of modelling of the climate of the last millennium prior to the CMIP5–PMIP3 (Coupled Model Intercomparison Project Phase 5–Paleoclimate Modelling Intercomparison Project Phase 3) coordinated effort is presented and compared to the available temperature reconstructions. Simulations and reconstructions broadly agree on reproducing the major temperature changes and suggest an overall linear response to external forcing on multidecadal or longer timescales. Internal variability is found to have an important influence at hemispheric and global scales. The spatial distribution of simulated temperature changes during the transition from the Medieval Climate Anomaly to the Little Ice Age disagrees with that found in the reconstructions. Thus, either internal variability is a possible major player in shaping temperature changes through the millennium or the model simulations have problems realistically representing the response pattern to external forcing. A last millennium transient climate response (LMTCR) is defined to provide a quantitative framework for analysing the consistency between simulated and reconstructed climate. Beyond an overall agreement between simulated and reconstructed LMTCR ranges, this analysis is able to single out specific discrepancies between some reconstructions and the ensemble of simulations. The disagreement is found in the cases where the reconstructions show reduced covariability with external forcings or when they present high rates of temperature change.
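The LMTCR-style consistency check described above amounts to comparing the slope of temperature change against external forcing between simulations and reconstructions. As a hedged sketch (the paper's exact LMTCR definition may differ, and the series below are synthetic), an ordinary least-squares response slope would look like:

```python
# Least-squares slope dT/dF of temperature anomalies on external forcing,
# a toy stand-in for a transient-climate-response diagnostic. Synthetic data.
def response_slope(forcing, temperature):
    """Ordinary least-squares slope dT/dF (K per W m^-2)."""
    n = len(forcing)
    mf = sum(forcing) / n
    mt = sum(temperature) / n
    cov = sum((f - mf) * (t - mt) for f, t in zip(forcing, temperature))
    var = sum((f - mf) ** 2 for f in forcing)
    return cov / var

forcing = [0.1 * i for i in range(100)]        # synthetic forcing (W m^-2)
temperature = [0.5 * f for f in forcing]       # perfectly linear response
```

A reconstruction with reduced covariability with the forcing, as flagged in the abstract, would show up here as a slope well outside the simulated range.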
Abstract:
The development of northern high-latitude peatlands has played an important role in the carbon (C) balance of the land biosphere since the Last Glacial Maximum (LGM). At present, carbon storage in northern peatlands is substantial and estimated to be 500 ± 100 Pg C (1 Pg C = 10¹⁵ g C). Here, we develop and apply a peatland module embedded in a dynamic global vegetation and land surface process model (LPX-Bern 1.0). The peatland module features a dynamic nitrogen cycle, a dynamic C transfer between the peatland acrotelm (upper oxic layer) and catotelm (deep anoxic layer), hydrology- and temperature-dependent respiration rates, and peatland-specific plant functional types. Nitrogen limitation down-regulates average modern net primary productivity over peatlands by about half. Decadal acrotelm-to-catotelm C fluxes vary between −20 and +50 g C m⁻² yr⁻¹ over the Holocene. Key model parameters are calibrated with reconstructed peat accumulation rates from peat-core data. The model reproduces the major features of the peat-core data and of the observation-based modern circumpolar soil carbon distribution. Results from a set of simulations for possible evolutions of northern peat development and areal extent show that soil C stocks in modern peatlands increased by 365–550 Pg C since the LGM, of which 175–272 Pg C accumulated between 11 and 5 kyr BP. Furthermore, our simulations suggest a persistent C sequestration rate of 35–50 Pg C per 1000 yr in present-day peatlands under current climate conditions, and that this C sink could either be sustained or turn into a source by AD 2100, depending on the climate trajectories projected for the different representative greenhouse gas concentration pathways.
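The acrotelm-to-catotelm transfer described above can be caricatured as a two-box model: net primary productivity enters the fast-decaying oxic pool, which leaks carbon into a slowly decaying anoxic pool. This is only an illustrative sketch, not the LPX-Bern formulation, and every parameter value is invented:

```python
# Toy two-box peat carbon model (forward Euler, 1-year steps). All rates are
# illustrative placeholders, not LPX-Bern 1.0 parameters.
def step_peat(acro, cato, npp, k_acro=0.05, k_cato=1e-4, transfer=0.01, dt=1.0):
    """Advance the acrotelm and catotelm carbon pools (g C m^-2) by dt years."""
    flux_down = transfer * acro                # acrotelm -> catotelm transfer
    d_acro = npp - k_acro * acro - flux_down   # input minus oxic decay and transfer
    d_cato = flux_down - k_cato * cato         # transfer minus slow anoxic decay
    return acro + d_acro * dt, cato + d_cato * dt

acro, cato = 0.0, 0.0
for _ in range(5000):                          # integrate 5 kyr of accumulation
    acro, cato = step_peat(acro, cato, npp=200.0)
```

The acrotelm equilibrates within decades while the catotelm keeps accumulating for millennia, which is the structural reason a model like this can sustain a persistent Holocene carbon sink.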
Abstract:
Changes in Greenland accumulation and the stability of the relationship between accumulation variability and large-scale circulation are assessed by performing time-slice simulations for the present day, the preindustrial era, the early Holocene, and the Last Glacial Maximum (LGM) with a comprehensive climate model. The stability issue is an important prerequisite for reconstructions of Northern Hemisphere atmospheric circulation variability based on accumulation or precipitation proxy records from Greenland ice cores. The analysis reveals that the relationship between accumulation variability and large-scale circulation undergoes a significant seasonal cycle. As the contributions of the individual seasons to the annual signal change, annual mean accumulation variability is not necessarily related to the same atmospheric circulation patterns during the different climate states. Interestingly, within a season, local Greenland accumulation variability is indeed linked to a consistent circulation pattern, which is observed for all studied climate periods, even for the LGM. Hence, it would be possible to deduce a reliable reconstruction of seasonal atmospheric variability (e.g., for North Atlantic winters) if an accumulation or precipitation proxy were available that resolves single seasons. We further show that the simulated impacts of orbital forcing and of changes in the ice sheet topography on Greenland accumulation exhibit strong spatial differences, emphasizing that accumulation records from different ice core sites cannot be expected to look alike in either their interannual or their long-term (centennial to millennial) variability, since each includes a distinct local signature. The only uniform signal to external forcing is the strong decrease in Greenland accumulation during glacial (LGM) conditions and an increase associated with the recent rise in greenhouse gas concentrations.
Abstract:
With the observation that stochasticity is important in biological systems, stochastic chemical kinetics has begun to receive wider interest. While Monte Carlo discrete-event simulations most accurately capture the variability of molecular species, they become computationally costly for complex reaction–diffusion systems with large populations of molecules. On the other hand, continuous-time models are computationally efficient but fail to capture any variability in the molecular species. In this study a hybrid stochastic approach is introduced for simulating reaction–diffusion systems. We developed an adaptive partitioning strategy in which processes with high frequency are simulated with deterministic rate-based equations, and those with low frequency using the exact stochastic algorithm of Gillespie. The stochastic behavior of cellular pathways is therefore preserved while the method remains applicable to large populations of molecules. We describe our method and demonstrate its accuracy and efficiency compared with the Gillespie algorithm for two different systems: first, a model of intracellular viral kinetics with two steady states, and second, a compartmental model of the postsynaptic spine head for studying the dynamics of Ca2+ and NMDA receptors.
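For reference, the exact stochastic algorithm of Gillespie that such a hybrid scheme falls back on for low-frequency channels can be sketched for a simple birth–death process. The adaptive partitioning described in the abstract (routing high-propensity channels to rate equations) is deliberately omitted, and the rates and seed below are illustrative:

```python
# Exact Gillespie SSA for a birth-death process: 0 -> X at rate k_on,
# X -> 0 at rate k_off * X. Every channel is simulated stochastically here;
# a hybrid method would treat high-propensity channels deterministically.
import random

def gillespie_birth_death(k_on, k_off, x0, t_end, rng):
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while True:
        a1 = k_on                # propensity of the birth reaction
        a2 = k_off * x           # propensity of the death reaction
        a0 = a1 + a2
        if a0 == 0.0:
            break                # no reaction can fire
        t += rng.expovariate(a0)                     # exponential waiting time
        if t > t_end:
            break
        x += 1 if rng.random() < a1 / a0 else -1     # pick which reaction fires
        trajectory.append((t, x))
    return trajectory

rng = random.Random(0)
traj = gillespie_birth_death(k_on=10.0, k_off=1.0, x0=0, t_end=50.0, rng=rng)
```

For these rates the stationary copy number fluctuates around k_on/k_off = 10, and the cost per step grows with the total propensity, which is why large, fast systems motivate the hybrid treatment.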
Abstract:
Monte Carlo simulations arrive at their results by introducing randomness, sometimes derived from a physical randomizing device. Nonetheless, we argue, they open no new epistemic channels beyond that already employed by traditional simulations: the inference by ordinary argumentation of conclusions from assumptions built into the simulations. We show that Monte Carlo simulations cannot produce knowledge other than by inference, and that they resemble other computer simulations in the manner in which they derive their conclusions. Simple examples of Monte Carlo simulations are analysed to identify the underlying inferences.
Abstract:
Monte Carlo simulation is a powerful method in many natural and social sciences. But what sort of method is it? And where does its power come from? Are Monte Carlo simulations experiments, theories or something else? The aim of this talk is to answer these questions and to explain the power of Monte Carlo simulations. I provide a classification of Monte Carlo techniques and defend the claim that Monte Carlo simulation is a sort of inference.
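The claim that Monte Carlo simulation is a sort of inference is easy to illustrate with the textbook example of estimating π: the random samples merely evaluate an area ratio whose value already follows deductively from the setup.

```python
# Textbook Monte Carlo estimate of pi: sample points uniformly in the unit
# square and count those landing inside the quarter circle. The randomness is
# only a sampling device for an ordinary (deterministic) area argument.
import random

def estimate_pi(n_samples, rng):
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

rng = random.Random(42)
pi_hat = estimate_pi(100_000, rng)
```

The estimator's standard error shrinks as 1/sqrt(n), so with 100,000 samples the result lies within a few hundredths of π, exactly as the underlying inference predicts.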
Abstract:
Optimized regional climate simulations are conducted using the Polar MM5, a version of the fifth-generation Pennsylvania State University–NCAR Mesoscale Model (MM5), with a 60-km horizontal resolution domain over North America during the Last Glacial Maximum (LGM, 21 000 calendar years ago), when much of the continent was covered by the Laurentide Ice Sheet (LIS). The objective is to describe the LGM annual cycle at high spatial resolution with an emphasis on the winter atmospheric circulation. Output from a tailored NCAR Community Climate Model version 3 (CCM3) simulation of the LGM climate is used to provide the initial and lateral boundary conditions for Polar MM5. LGM boundary conditions include continental ice sheets, appropriate orbital forcing, reduced CO2 concentration, paleovegetation, modified sea surface temperatures, and lowered sea level. Polar MM5 produces a substantially different atmospheric response to the LGM boundary conditions than CCM3 and other recent GCM simulations. In particular, from November to April the upper-level flow is split around a blocking anticyclone over the LIS, with a northern branch over the Canadian Arctic and a southern branch impacting southern North America. The split flow pattern is most pronounced in January and transitions into a single, consolidated jet stream that migrates northward over the LIS during summer. Sensitivity experiments indicate that the winter split flow in Polar MM5 is primarily due to mechanical forcing by the LIS, although model physics and resolution also contribute to the simulated flow configuration. The Polar MM5 LGM results are generally consistent with proxy climate estimates in the western United States, Alaska, and the Canadian Arctic and may help resolve some long-standing discrepancies between proxy data and previous simulations of the LGM climate.
Abstract:
The Princeton Ocean Model is used to study the circulation features in the Pearl River Estuary and their responses to tide, river discharge, wind, and heat flux in the winter dry and summer wet seasons. The model has an orthogonal curvilinear grid in the horizontal plane, with variable spacing from 0.5 km in the estuary to 1 km on the shelf, and 15 sigma levels in the vertical direction. The initial conditions and the subtidal open boundary forcing are obtained from an associated larger-scale model of the northern South China Sea. Buoyancy forcing uses the climatological monthly heat fluxes and river discharges, and both the climatological monthly wind and the realistic wind are used in the sensitivity experiments. The tidal forcing is represented by sinusoidal functions with the observed amplitudes and phases. In this paper, the simulated tide is first examined. The simulated seasonal distributions of the salinity, as well as the temporal variations of the salinity and velocity over a tidal cycle, are described and then compared with in situ survey data from July 1999 and January 2000. The model successfully reproduces the main hydrodynamic processes, such as stratification, mixing, frontal dynamics, summer upwelling, and two-layer gravitational circulation, and the distributions of hydrodynamic parameters in the Pearl River Estuary and coastal waters for both the winter and summer seasons.
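The sinusoidal tidal forcing described above amounts to superposing harmonic constituents, each with a prescribed amplitude, period, and phase, at the open boundary. A minimal sketch, with constituent values that are illustrative placeholders rather than the observed Pearl River Estuary values:

```python
# Open-boundary tidal elevation as a sum of harmonic constituents.
# (name, amplitude m, period h, phase deg) -- illustrative placeholders.
import math

CONSTITUENTS = [("M2", 0.5, 12.42, 30.0), ("S2", 0.2, 12.00, 60.0),
                ("K1", 0.3, 23.93, 120.0), ("O1", 0.25, 25.82, 150.0)]

def tidal_elevation(t_hours):
    """Boundary sea-surface elevation (m) at time t from superposed constituents."""
    return sum(a * math.cos(2.0 * math.pi * t_hours / period - math.radians(ph))
               for _, a, period, ph in CONSTITUENTS)
```

The elevation can never exceed the sum of the amplitudes, and the beating between the semidiurnal (M2, S2) and diurnal (K1, O1) constituents produces the spring-neap modulation a model like this is forced with.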
Abstract:
We present a conceptual prototype model of a focal plane array unit for the STEAMR instrument, highlighting the challenges presented by the required high relative beam proximity of the instrument and focus on how edge-diffraction effects contribute to the array's performance. The analysis was carried out as a comparative process using both PO & PTD and MoM techniques. We first highlight general differences between these computational techniques, with the discussion focusing on diffractive edge effects for near-field imaging reflectors with high truncation. We then present the results of in-depth modeling analyses of the STEAMR focal plane array followed by near-field antenna measurements of a breadboard model of the array. The results of these near-field measurements agree well with both simulation techniques although MoM shows slightly higher complex beam coupling to the measurements than PO & PTD.
Abstract:
Peptide dendrimers are synthetic tree-like molecules composed of amino acids. These molecules exhibit at least two kinds of preferential structural behavior, acquiring either compact or noncompact shapes. However, the key structural determinants of these behaviors have remained, until now, unstudied. Herein, we conduct a comprehensive investigation of the structural determinants of peptide dendrimers by employing long molecular dynamics simulations to characterize an extended set of third-generation dendrimers. Our results clearly show that a trade-off between electrostatic effects and hydrogen bond formation controls structure acquisition in these systems. Moreover, by selectively changing the dendrimers' charge we are able to manipulate the exhibited compactness. In contrast, the length of the branching residues does not seem to be a major structural determinant. Our results are in accordance with the most recent experimental evidence and shed some light on the key molecular-level interactions controlling structure acquisition in these systems. Thus, the results presented constitute valuable insights that can contribute to the development of truly tailor-made dendritic systems.