43 results for "time of simulation"
Abstract:
The ability to undertake repeat measurements of flow-mediated dilatation (FMD) within a short time of a previous measurement would be useful to improve accuracy or to repeat a failed initial procedure. Although standard methods report that a minimum of 10 min is required between measurements, there are no published data to support this. Thirty healthy volunteers had five FMD measurements performed within a 2-h period, separated by various time intervals (5, 15 and 30 min). In 19 volunteers, FMD was also performed as soon as the vessel had returned to its baseline diameter. There was no significant difference between any of the FMD measurements or parameters across the visits, indicating that repeat measurements may be taken after a minimum of 5 min or as soon as the vessel has returned to its baseline diameter, which in some subjects may be less than 5 min.
Abstract:
Six land surface models and five global hydrological models participate in a model intercomparison project (WaterMIP), which for the first time compares simulation results of these different classes of models in a consistent way. In this paper the simulation setup is described and aspects of the multi-model global terrestrial water balance are presented. All models were run at 0.5 degree spatial resolution for the global land areas for a 15-year period (1985-1999) using a newly-developed global meteorological dataset. Simulated global terrestrial evapotranspiration, excluding Greenland and Antarctica, ranges from 415 to 586 mm year-1 (60,000 to 85,000 km3 year-1) and simulated runoff ranges from 290 to 457 mm year-1 (42,000 to 66,000 km3 year-1). Both the mean and median runoff fractions for the land surface models are lower than those of the global hydrological models, although the range is wider. Significant simulation differences between land surface and global hydrological models are found to be caused by the snow scheme employed. The physically-based energy balance approach used by land surface models generally results in lower snow water equivalent values than the conceptual degree-day approach used by global hydrological models. Some differences in simulated runoff and evapotranspiration are explained by model parameterizations, although the processes included and parameterizations used are not distinct to either land surface models or global hydrological models. The results show that differences between models are major sources of uncertainty. Climate change impact studies thus need to use not only multiple climate models, but also some other measure of uncertainty (e.g. multiple impact models).
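For context, the conceptual degree-day snow scheme mentioned above converts air temperature above a threshold directly into melt, whereas the energy-balance schemes of the land surface models solve the full surface energy budget. The minimal sketch below illustrates only the degree-day idea; the melt factor and threshold temperature are illustrative values, not settings of any WaterMIP model.

```python
# Minimal sketch (illustrative, not from the paper) of a degree-day snowmelt scheme.
def degree_day_melt(air_temp_c, swe_mm, melt_factor=3.0, t_threshold=0.0):
    """Return (melt, updated snow water equivalent) for one day.

    air_temp_c  -- daily mean air temperature in degrees C
    swe_mm      -- snow water equivalent in mm
    melt_factor -- mm of melt per degree-day above the threshold
    """
    potential_melt = melt_factor * max(air_temp_c - t_threshold, 0.0)
    melt = min(potential_melt, swe_mm)  # cannot melt more snow than is present
    return melt, swe_mm - melt

# Example: a +5 degC day acting on a 40 mm snowpack
melt, swe = degree_day_melt(5.0, 40.0)
print(melt, swe)  # 15.0 25.0
```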
Abstract:
Variations in the Atlantic Meridional Overturning Circulation (MOC) exert an important influence on climate, particularly on decadal time scales. Simulation of the MOC in coupled climate models is compromised, to a degree that is unknown, by their lack of fidelity in resolving some of the key processes involved. There is an overarching need to increase the resolution and fidelity of climate models, but also to assess how increases in resolution influence the simulation of key phenomena such as the MOC. In this study we investigate the impact of significantly increasing the (ocean and atmosphere) resolution of a coupled climate model on the simulation of MOC variability by comparing high and low resolution versions of the same model. In both versions, decadal variability of the MOC is closely linked to density anomalies that propagate from the Labrador Sea southward along the deep western boundary. We demonstrate that the MOC adjustment proceeds more rapidly in the higher resolution model due to the increased speed of western boundary waves. However, the response of the Atlantic Sea Surface Temperatures (SSTs) to MOC variations is relatively robust, in pattern if not in magnitude, across the two resolutions. The MOC also excites a coupled ocean-atmosphere response in the tropical Atlantic in both model versions. In the higher resolution model, but not the lower resolution model, there is evidence of a significant response in the extratropical atmosphere over the North Atlantic 6 years after a maximum in the MOC. In both models there is evidence of a weak negative feedback on deep density anomalies in the Labrador Sea, and hence on the MOC (with a time scale of approximately ten years). Our results highlight the need for further work to understand the decadal variability of the MOC and its simulation in climate models.
Abstract:
This chapter aims to provide an overview of building simulation in a theoretical and practical context. The following sections demonstrate the importance of simulation programs at a time when society is shifting towards a low carbon future and the practice of sustainable design becomes mandatory. The initial sections acquaint the reader with basic terminology and comment on the capabilities and categories of simulation tools before discussing the historical development of programs. The main body of the chapter considers the primary benefits and users of simulation programs, looks at the role of simulation in the construction process and examines the validity and interpretation of simulation results. The latter half of the chapter looks at program selection and discusses software capability, product characteristics, input data and output formats. The inclusion of a case study demonstrates the simulation procedure and key concepts. Finally, the chapter closes with a look into the future, commenting on the development of simulation capability, user interfaces and how simulation will continue to empower building professionals as society faces new challenges in a rapidly changing landscape.
Abstract:
This paper presents practical approaches to the problem of sample size re-estimation in the case of clinical trials with survival data when proportional hazards can be assumed. When data are readily available at the time of the review, on a full range of survival experiences across the recruited patients, it is shown that, as expected, performing a blinded re-estimation procedure is straightforward and can help to maintain the trial's pre-specified error rates. Two alternative methods for dealing with the situation where limited survival experiences are available at the time of the sample size review are then presented and compared. In this instance, extrapolation is required in order to undertake the sample size re-estimation. Worked examples, together with results from a simulation study, are described. It is concluded that, as in the standard case, use of either extrapolation approach successfully protects the trial error rates. Copyright © 2012 John Wiley & Sons, Ltd.
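As background (this is the generic event-count approximation, not the paper's specific re-estimation procedure), sample size for survival trials under proportional hazards is usually driven by the required number of events, so a re-estimation at review amounts to updating the projected event count. A minimal sketch of the standard Schoenfeld approximation:

```python
# Hedged sketch of the Schoenfeld event-count approximation for a two-sided
# log-rank test under proportional hazards (generic formula, not the paper's method).
import math
from scipy.stats import norm

def required_events(hazard_ratio, alpha=0.05, power=0.9, alloc=0.5):
    """Events needed to detect `hazard_ratio` at two-sided level `alpha`."""
    z_a = norm.ppf(1 - alpha / 2)   # critical value of the two-sided test
    z_b = norm.ppf(power)           # quantile corresponding to the target power
    return (z_a + z_b) ** 2 / (alloc * (1 - alloc) * math.log(hazard_ratio) ** 2)

print(round(required_events(0.7)))  # ~330 events for HR = 0.7 at 90% power, 1:1 allocation
```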
Abstract:
The rapid-distortion model of Hunt & Graham (1978) for the initial distortion of turbulence by a flat boundary is extended to account fully for viscous processes. Two types of boundary are considered: a solid wall and a free surface. The model is shown to be formally valid provided two conditions are satisfied. The first condition is that time is short compared with the decorrelation time of the energy-containing eddies, so that nonlinear processes can be neglected. The second condition is that the viscous layer near the boundary, where tangential motions adjust to the boundary condition, is thin compared with the scales of the smallest eddies. The viscous layer can then be treated using thin-boundary-layer methods. Given these conditions, the distorted turbulence near the boundary is related to the undistorted turbulence, and thence profiles of turbulence dissipation rate near the two types of boundary are calculated and shown to agree extremely well with profiles obtained by Perot & Moin (1993) by direct numerical simulation. The dissipation rates are higher near a solid wall than in the bulk of the flow because the no-slip boundary condition leads to large velocity gradients across the viscous layer. In contrast, the weaker constraint of no stress at a free surface leads to the dissipation rate close to a free surface actually being smaller than in the bulk of the flow. This explains why tangential velocity fluctuations parallel to a free surface are so large. In addition we show that it is the adjustment of the large energy-containing eddies across the viscous layer that controls the dissipation rate, which explains why rapid-distortion theory can give quantitatively accurate values for the dissipation rate. We also find that the dissipation rate obtained from the model evaluated at the time when the model is expected to fail actually yields useful estimates of the dissipation obtained from the direct numerical simulation at times when the nonlinear processes are significant. We conclude that the main role of nonlinear processes is to arrest growth by linear processes of the viscous layer after about one large-eddy turnover time.
Abstract:
The cold equatorial SST bias in the tropical Pacific that is persistent in many coupled OAGCMs severely impacts the fidelity of the simulated climate and variability in this key region, such as the ENSO phenomenon. The classical bias analysis in these models usually concentrates on multi-decadal to centennial time series needed to obtain statistically robust features. Yet, this strategy cannot fully explain how the model errors were generated in the first place. Here, we use seasonal re-forecasts (hindcasts) to track back the origin of this cold bias. As such hindcasts are initialized close to observations, the transient drift leading to the cold bias can be analyzed to distinguish pre-existing errors from errors responding to initial ones. A time sequence of processes involved in the advent of the final mean state errors can then be proposed. We apply this strategy to the ENSEMBLES-FP6 project multi-model hindcasts of the last decades. Four of the five AOGCMs develop a persistent equatorial cold tongue bias within a few months. The associated systematic errors are first assessed separately for the warm and cold ENSO phases. We find that the models are able to reproduce either El Niño or La Niña close to observations, but not both. ENSO composites then show that the spurious equatorial cooling is maximum for El Niño years for the February and August start dates. For these events and at this time of the year, zonal wind errors in the equatorial Pacific are present from the beginning of the simulation and are hypothesized to be at the origin of the equatorial cold bias, generating too strong upwelling conditions. The systematic underestimation of the mixed layer depth in several models can also amplify the growth of the SST bias. The seminal role of these zonal wind errors is further demonstrated by carrying out ocean-only experiments forced by the AOGCMs' daily 10-meter wind. In a case study, we show that for several models, this forcing is sufficient to reproduce the main SST error patterns seen after 1 month in the AOGCM hindcasts.
Abstract:
Geomagnetic activity has long been known to exhibit approximately 27 day periodicity, resulting from solar wind structures repeating each solar rotation. Thus a very simple near-Earth solar wind forecast is 27 day persistence, wherein the near-Earth solar wind conditions today are assumed to be identical to those 27 days previously. Effective use of such a persistence model as a forecast tool, however, requires the performance and uncertainty to be fully characterized. The first half of this study determines which solar wind parameters can be reliably forecast by persistence and how the forecast skill varies with the solar cycle. The second half of the study shows how persistence can provide a useful benchmark for more sophisticated forecast schemes, namely physics-based numerical models. Point-by-point assessment methods, such as correlation and mean-square error, find persistence skill comparable to numerical models during solar minimum, despite the 27 day lead time of persistence forecasts, versus 2–5 days for numerical schemes. At solar maximum, however, the dynamic nature of the corona means 27 day persistence is no longer a good approximation and skill scores suggest persistence is out-performed by numerical models for almost all solar wind parameters. But point-by-point assessment techniques are not always a reliable indicator of usefulness as a forecast tool. An event-based assessment method, which focusses on key solar wind structures, finds persistence to be the most valuable forecast throughout the solar cycle. This reiterates the fact that the means of assessing the "best" forecast model must be specifically tailored to its intended use.
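A 27-day persistence forecast is simple enough to state in a few lines of code. The sketch below (illustrative only, not the study's analysis code, with a synthetic solar wind speed series) pairs each observation with the value 27 days earlier and evaluates the two point-by-point measures mentioned in the abstract, correlation and mean-square error.

```python
# Hedged sketch of a 27-day persistence forecast and its point-by-point skill.
import numpy as np

def persistence_forecast(series, lag_days=27, samples_per_day=1):
    """Return (forecast, verifying observation): each forecast value is the
    observation `lag_days` earlier."""
    lag = lag_days * samples_per_day
    return series[:-lag], series[lag:]

rng = np.random.default_rng(0)
days = np.arange(365)
# Hypothetical daily speed series with a recurring 27-day structure plus noise (km/s)
solar_wind_speed = 400 + 80 * np.sin(2 * np.pi * days / 27) + rng.normal(0, 30, days.size)

forecast, observed = persistence_forecast(solar_wind_speed)
correlation = np.corrcoef(forecast, observed)[0, 1]
mse = np.mean((forecast - observed) ** 2)
print(f"r = {correlation:.2f}, MSE = {mse:.0f} (km/s)^2")
```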
Abstract:
In the 1960s North Atlantic sea surface temperatures (SST) cooled rapidly. The magnitude of the cooling was largest in the North Atlantic subpolar gyre (SPG), and was coincident with a rapid freshening of the SPG. Here we analyze hindcasts of the 1960s North Atlantic cooling made with the UK Met Office's decadal prediction system (DePreSys), which is initialised using observations. It is shown that DePreSys captures, with a lead time of several years, the observed cooling and freshening of the North Atlantic SPG. DePreSys also captures changes in SST over the wider North Atlantic and surface climate impacts over the wider region, such as changes in atmospheric circulation in winter and sea ice extent. We show that initialisation of an anomalously weak Atlantic Meridional Overturning Circulation (AMOC), and hence weak northward heat transport, is crucial for DePreSys to predict the magnitude of the observed cooling. Such an anomalously weak AMOC is not captured when ocean observations are not assimilated (i.e. it is not a forced response in this model). The freshening of the SPG is also dominated by ocean salt transport changes in DePreSys; in particular, the simulation of advective freshwater anomalies analogous to the Great Salinity Anomaly was key. Therefore, DePreSys suggests that ocean dynamics played an important role in the cooling of the North Atlantic in the 1960s, and that this event was predictable.
Abstract:
Performance modelling is a useful tool in the lifecycle of high performance scientific software, such as weather and climate models, especially as a means of ensuring efficient use of available computing resources. In particular, sufficiently accurate performance prediction could reduce the effort and experimental computer time required when porting and optimising a climate model to a new machine. In this paper, traditional techniques are used to predict the computation time of a simple shallow water model which is illustrative of the computation (and communication) involved in climate models. These models are compared with real execution data gathered on AMD Opteron-based systems, including several phases of the U.K. academic community HPC resource, HECToR. Some success is had in relating source code to achieved performance for the K10 series of Opterons, but the method is found to be inadequate for the next-generation Interlagos processor. The experience leads to the investigation of a data-driven application benchmarking approach to performance modelling. Results for an early version of the approach are presented using the shallow model as an example.
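To illustrate what a "traditional" analytic performance prediction looks like in practice, the sketch below estimates per-step runtime from floating-point work and memory traffic per grid point. It is a generic additive flop-and-bandwidth model, not the paper's model, and all numbers (operation counts, sustained rates) are illustrative rather than measurements of HECToR or any Opteron system.

```python
# Hedged sketch of a simple analytic performance model for a grid-based solver.
def predict_runtime(nx, ny, nsteps,
                    flops_per_point=200,   # floating-point ops per grid point per step (assumed)
                    bytes_per_point=400,   # memory traffic per grid point per step (assumed)
                    sustained_gflops=9.2,  # sustained compute rate (assumed)
                    bandwidth_gbs=5.0):    # sustained memory bandwidth (assumed)
    points = nx * ny
    compute_s = points * flops_per_point / (sustained_gflops * 1e9)
    memory_s = points * bytes_per_point / (bandwidth_gbs * 1e9)
    # This simple model assumes compute and memory costs do not overlap.
    return nsteps * (compute_s + memory_s)

print(f"{predict_runtime(512, 512, 1000):.1f} s predicted")
```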
Abstract:
While state-of-the-art models of Earth's climate system have improved tremendously over the last 20 years, nontrivial structural flaws still hinder their ability to forecast the decadal dynamics of the Earth system realistically. Contrasting the skill of these models not only with each other but also with empirical models can reveal the space and time scales on which simulation models exploit their physical basis effectively and quantify their ability to add information to operational forecasts. The skill of decadal probabilistic hindcasts for annual global-mean and regional-mean temperatures from the EU Ensemble-Based Predictions of Climate Changes and Their Impacts (ENSEMBLES) project is contrasted with several empirical models. Both the ENSEMBLES models and a "dynamic climatology" empirical model show probabilistic skill above that of a static climatology for global-mean temperature. The dynamic climatology model, however, often outperforms the ENSEMBLES models. The fact that empirical models display skill similar to that of today's state-of-the-art simulation models suggests that empirical forecasts can improve decadal forecasts for climate services, just as in weather, medium-range, and seasonal forecasting. It is suggested that the direct comparison of simulation models with empirical models become a regular component of large model forecast evaluations. Doing so would clarify the extent to which state-of-the-art simulation models provide information beyond that available from simpler empirical models and clarify current limitations in using simulation forecasting for decision support. Ultimately, the skill of simulation models based on physical principles is expected to surpass that of empirical models in a changing climate; their direct comparison provides information on progress toward that goal, which is not available in model–model intercomparisons.
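One common way to express "probabilistic skill above that of a static climatology" is a skill score built from a proper score such as the continuous ranked probability score (CRPS). The sketch below is illustrative only (synthetic ensembles, not the ENSEMBLES evaluation code or its exact scoring rule) and shows how a simulation ensemble can be benchmarked against a climatological reference.

```python
# Hedged sketch: CRPS of an ensemble forecast versus a climatological reference.
import numpy as np

def crps_ensemble(members, obs):
    """Standard ensemble estimator of the CRPS for a single verification."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

rng = np.random.default_rng(1)
obs = 0.4                                 # hypothetical temperature anomaly (K)
model_ensemble = rng.normal(0.3, 0.2, 20) # hypothetical simulation hindcast ensemble
climatology = rng.normal(0.0, 0.4, 50)    # static climatological reference sample

skill = 1 - crps_ensemble(model_ensemble, obs) / crps_ensemble(climatology, obs)
print(f"CRPS skill relative to climatology: {skill:.2f}")  # > 0 means added value
```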
Abstract:
Nutrient enrichment and drought conditions are major threats to lowland rivers causing ecosystem degradation and composition changes in plant communities. The controls on primary producer composition in chalk rivers are investigated using a new model and existing data from the River Frome (UK) to explore abiotic and biotic interactions. The growth and interaction of four primary producer functional groups (suspended algae, macrophytes, epiphytes, sediment biofilm) were successfully linked with flow, nutrients (N, P), light and water temperature such that the modelled biomass dynamics of the four groups matched those observed. Simulated growth of suspended algae was limited mainly by the residence time of the river rather than in-stream phosphorus concentrations. The simulated growth of the fixed vegetation (macrophytes, epiphytes, sediment biofilm) was overwhelmingly controlled by incoming solar radiation and light attenuation in the water column. Nutrients and grazing had little control when compared to the other physical controls in the simulations. A number of environmental threshold values were identified in the model simulations for the different producer types. The simulation results highlighted the importance of the pelagic–benthic interactions within the River Frome and indicated that process interaction defined the behaviour of the primary producers, rather than a single, dominant driver. The model simulations pose interesting questions to be considered in the next iteration of field- and laboratory-based studies.
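The residence-time limitation of suspended algae has a simple interpretation: cells are flushed downstream at a rate of roughly one over the residence time, so net growth requires the specific growth rate to exceed that washout rate. The sketch below illustrates this reasoning only; it is not the paper's model, and all values are illustrative.

```python
# Hedged sketch of growth-versus-washout for suspended algae in a river reach.
import math

def algal_biomass(b0, growth_rate_per_day, residence_time_days, t_days):
    """Exponential biomass change with net rate = growth minus washout (1/residence time)."""
    net_rate = growth_rate_per_day - 1.0 / residence_time_days
    return b0 * math.exp(net_rate * t_days)

# Same growth rate, short versus long residence time (days)
print(algal_biomass(1.0, 0.5, 1.5, 10))  # washout dominates, biomass declines
print(algal_biomass(1.0, 0.5, 5.0, 10))  # growth dominates, biomass increases
```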
Validation of a priori CME arrival predictions made using real-time heliospheric imager observations
Abstract:
Between December 2010 and March 2013, volunteers for the Solar Stormwatch (SSW) Citizen Science project have identified and analyzed coronal mass ejections (CMEs) in the near real-time Solar Terrestrial Relations Observatory Heliospheric Imager observations, in order to make "Fearless Forecasts" of CME arrival times and speeds at Earth. Of the 60 predictions of Earth-directed CMEs, 20 resulted in an identifiable Interplanetary CME (ICME) at Earth within 1.5–6 days, with an average error in predicted transit time of 22 h, and average transit time of 82.3 h. The average error in predicting arrival speed is 151 km s−1, with an average arrival speed of 425 km s−1. In the same time period, there were 44 CMEs for which there were no corresponding SSW predictions, and there were 600 days on which there was neither a CME predicted nor observed. A number of metrics show that the SSW predictions do have useful forecast skill; however, there is still much room for improvement. We investigate potential improvements by using SSW inputs in three models of ICME propagation: two of constant acceleration and one of aerodynamic drag. We find that taking account of interplanetary acceleration can improve the average errors of transit time to 19 h and arrival speed to 77 km s−1.
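The simplest of the three propagation models, constant acceleration, reduces to solving a quadratic for the Sun-to-Earth transit time. The sketch below illustrates that kinematic step only; it is not the Solar Stormwatch or paper code, and the acceleration value used in the example is purely illustrative.

```python
# Hedged sketch: transit time of a CME under constant acceleration over 1 AU.
import math

AU_KM = 1.496e8  # Sun-Earth distance in km

def transit_time_hours(v0_kms, accel_kms2):
    """Solve d = v0*t + 0.5*a*t**2 for the first Sun-to-Earth crossing time."""
    if abs(accel_kms2) < 1e-12:
        t = AU_KM / v0_kms
    else:
        # first positive root of the quadratic in t
        t = (-v0_kms + math.sqrt(v0_kms**2 + 2 * accel_kms2 * AU_KM)) / accel_kms2
    return t / 3600.0

# Example: a 600 km/s CME decelerating gently toward the ambient solar wind speed
print(f"{transit_time_hours(600, -2e-4):.1f} h")  # roughly three days
```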