950 results for Decomposition of Ranked Models
Abstract:
The thermal decomposition of the complex K4[Ni(NO2)6]·H2O has been investigated over the temperature range 25–600 °C by a combination of infrared spectroscopy, powder X-ray diffraction, FAB mass spectrometry and elemental analysis. The first stage of reaction is loss of water and isomerisation of one of the coordinated nitro groups to form the complex K4[Ni(NO2)4(ONO)]·NO2. At temperatures around 200 °C the remaining nitro groups within the complex isomerise to the chelating nitrite form, and this process acts as a precursor to the loss of NO2 gas at temperatures above 270 °C. The product, which is stable up to 600 °C, is the complex K4[Ni(ONO)4]·NO2, in which the nickel atom is formally in the +1 oxidation state.
Abstract:
We analyze the publicly released outputs of the simulations performed by climate models (CMs) in preindustrial (PI) and Special Report on Emissions Scenarios A1B (SRESA1B) conditions. In the PI simulations, most CMs feature biases of the order of 1 W m⁻² for the net global and the net atmospheric, oceanic, and land energy balances. This does not result from transient effects but depends on the imperfect closure of the energy cycle in the fluid components and on inconsistencies over land. Thus, the planetary emission temperature is underestimated, which may explain the CMs' cold bias. In the PI scenario, CMs agree on the meridional atmospheric enthalpy transport's peak location (around 40°N/S), while discrepancies of ∼20% exist on the intensity. Disagreements on the oceanic transport peaks' location and intensity amount to ∼10° and ∼50%, respectively. In the SRESA1B runs, the atmospheric transport's peak shifts poleward, and its intensity increases up to ∼10% in both hemispheres. In most CMs, the Northern Hemispheric oceanic transport decreases, and the peaks shift equatorward in both hemispheres. The Bjerknes compensation mechanism is active on both climatological and interannual time scales. The total meridional transport peaks around 35° in both hemispheres and scenarios, whereas disagreements on the intensity reach ∼20%. With increased CO2 concentration, the total transport increases up to ∼10%, thus contributing to polar amplification of global warming. Advances are needed for achieving a self-consistent representation of climate as a nonequilibrium thermodynamic system. This is crucial for improving the CMs' skill in representing past and future climate changes.
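The meridional transport diagnostic discussed above can be sketched numerically. Given a zonal-mean net downward energy flux F(φ) in W m⁻², the implied northward transport follows from integrating F·cos φ from the South Pole. The flux profile below is an idealized, globally balanced assumption for illustration (a second Legendre polynomial in sin φ), not CM output; its implied transport happens to peak near 35°, the latitude the abstract reports for the total transport.

```python
import numpy as np

R_E = 6.371e6  # Earth radius in metres

def implied_transport(lat_deg, net_flux):
    """Northward energy transport (W) implied by a zonal-mean net
    downward flux F (W m^-2):
    T(phi) = 2*pi*R^2 * integral of F*cos(phi') dphi' from the South Pole.
    """
    phi = np.deg2rad(lat_deg)
    integrand = net_flux * np.cos(phi)
    # cumulative trapezoidal integration from the southernmost latitude
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(phi)
    return 2.0 * np.pi * R_E**2 * np.concatenate(([0.0], np.cumsum(steps)))

# Idealized, globally balanced flux: F = -A * P2(sin(phi)), A = 40 W m^-2
lat = np.linspace(-90.0, 90.0, 721)
F = 20.0 * (1.0 - 3.0 * np.sin(np.deg2rad(lat)) ** 2)
T = implied_transport(lat, F)
peak_lat = lat[np.argmax(T)]  # close to 35 degrees N for this profile
```

Because the idealized flux integrates to zero over the sphere, the transport returns to (numerically) zero at the North Pole, which is a useful sanity check when applying the same integral to model fluxes with imperfect closure.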
Abstract:
The present study investigates the initiation of precipitating deep convection in an ensemble of convection-resolving mesoscale models. Results of eight different model runs from five non-hydrostatic models are compared for a case of the Convective and Orographically-induced Precipitation Study (COPS). An isolated convective cell initiated east of the Black Forest crest in southwest Germany, although convective available potential energy was only moderate and convective inhibition was high. Measurements revealed that, due to the absence of synoptic forcing, convection was initiated by local processes related to the orography. In particular, the lifting by low-level convergence in the planetary boundary layer is assumed to be the dominant process on that day. The models used different configurations as well as different initial and boundary conditions. By comparing the different model performance with each other and with measurements, the processes which need to be well represented to initiate convection at the right place and time are discussed. Besides an accurate specification of the thermodynamic and kinematic fields, the results highlight the role of boundary-layer convergence features for quantitative precipitation forecasts in mountainous terrain.
Abstract:
Various methods of assessment have been applied to the One Dimensional Time to Explosion (ODTX) apparatus and experiments with the aim of allowing an estimate of the comparative violence of the explosion event to be made. Non-mechanical methods used were a simple visual inspection, measuring the increase in the void volume of the anvils following an explosion and measuring the velocity of the sound produced by the explosion over 1 metre. Mechanical methods used included monitoring piezo-electric devices inserted in the frame of the machine and measuring the rotational velocity of a rotating bar placed on the top of the anvils after it had been displaced by the shock wave. This last method, which resembles original Hopkinson Bar experiments, seemed the easiest to apply and analyse, giving relative rankings of violence and the possibility of the calculation of a “detonation” pressure.
Abstract:
A One-Dimensional Time to Explosion (ODTX) apparatus has been used to study the times to explosion of a number of compositions based on RDX and HMX over a range of contact temperatures. The times to explosion at any given temperature tend to increase from RDX to HMX and with the proportion of HMX in the composition. Thermal ignition theory has been applied to the time-to-explosion data to calculate kinetic parameters. The apparent activation energy for all of the compositions lay between 127 kJ mol⁻¹ and 146 kJ mol⁻¹. There were, however, large differences in the pre-exponential factor, and it was this factor, rather than the activation energy, that controlled the time to explosion.
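The kinetic analysis described above amounts to an Arrhenius-type fit: thermal ignition theory gives t ≈ A·exp(Ea/RT), so a linear fit of ln t against 1/T yields the apparent activation energy from the slope and the pre-exponential factor from the intercept. A minimal sketch with synthetic data (the values below are illustrative, not measured ODTX results):

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def ignition_kinetics(temps_K, times_s):
    """Estimate apparent activation energy Ea and pre-exponential
    factor A from time-to-explosion data, assuming
    t = A * exp(Ea / (R * T)), i.e. ln(t) linear in 1/T.
    """
    x = 1.0 / np.asarray(temps_K, dtype=float)
    y = np.log(np.asarray(times_s, dtype=float))
    slope, intercept = np.polyfit(x, y, 1)
    return slope * R, np.exp(intercept)  # Ea in J mol^-1, A in s

# Synthetic data generated with Ea = 130 kJ/mol, A = 1e-12 s
temps = np.array([500.0, 520.0, 540.0, 560.0])
times = 1e-12 * np.exp(130e3 / (R * temps))
Ea, A = ignition_kinetics(temps, times)  # recovers ~130 kJ/mol
```

On such a fit, two compositions with the same slope (activation energy) but different intercepts (pre-exponential factors) have different times to explosion at every temperature, which is the behaviour the abstract reports.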
Abstract:
This study evaluated the effects of fat and sugar levels on the surface properties of Lactobacillus rhamnosus GG during storage in food model systems, simulating yogurt and ice cream, and related them with the ability of the bacterial cells to adhere to Caco-2 cells. Freeze-dried L. rhamnosus GG cells were added to the model food systems and stored for 7 days. The bacterial cells were analyzed for cell viability, hydrophobicity, ζ potential, and their ability to adhere to Caco-2 cells. The results indicated that the food type and its composition affected the surface and adhesion properties of the bacterial cells during storage, with yogurt being a better delivery vehicle than ice cream in terms of bacterial adhesion to Caco-2 cells. The most important factor influencing bacterial adhesion was the storage time rather than the levels of fats and sugars, indicating that conformational changes were taking place on the surface of the bacterial cells during storage.
Abstract:
Tropical cyclones (TCs) are normally not studied at the individual level with Global Climate Models (GCMs), because the coarse grid spacing is often deemed insufficient for a realistic representation of the basic underlying processes. GCMs are indeed routinely deployed at low resolution in order to enable sufficiently long integrations, which means that only large-scale TC proxies are diagnosed. A new class of GCMs is emerging, however, which is capable of simulating TC-type vortices while retaining a horizontal resolution similar to that of operational NWP GCMs; their integration on the latest supercomputers enables the completion of long-term integrations. The UK-Japan Climate Collaboration (UJCC) and UK-HiGEM projects have developed climate GCMs which can be run routinely for decades (with a grid spacing of 60 km) or centuries (with a grid spacing of 90 km); when coupled to the ocean GCM, a mesh of 1/3 degree provides eddy-permitting resolution. The 90 km resolution model has been developed entirely by the UK-HiGEM consortium (together with its 1/3 degree ocean component); the 60 km atmospheric GCM has been developed by UJCC in collaboration with the Met Office Hadley Centre.
Abstract:
This paper extends the singular value decomposition to a path of matrices E(t). An analytic singular value decomposition of a path of matrices E(t) is an analytic path of factorizations E(t) = X(t)S(t)Y(t)^T, where X(t) and Y(t) are orthogonal and S(t) is diagonal. To maintain differentiability, the diagonal entries of S(t) are allowed to be either positive or negative and to appear in any order. This paper investigates the existence and uniqueness of analytic SVDs and develops an algorithm for computing them. We show that a real analytic path E(t) always admits a real analytic SVD, and that a full-rank, smooth path E(t) with distinct singular values admits a smooth SVD. We derive a differential equation for the left factor, develop Euler-like and extrapolated Euler-like numerical methods for approximating an analytic SVD, and prove that the Euler-like method converges.
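The continuity issue the paper addresses can be seen in a small sketch: a pointwise SVD leaves the sign of each singular vector arbitrary, so factors computed independently at nearby t can jump. The code below illustrates only the sign-alignment idea (it is not the paper's Euler-like integrator): each left singular vector is flipped to match the previous step and the flip is absorbed into the corresponding diagonal entry of S, which is exactly why S must be allowed to carry negative entries.

```python
import numpy as np

def smooth_svd_path(mats):
    """SVDs along a path of matrices with signs chosen for continuity.

    Each left singular vector is aligned with its predecessor; the
    sign flip is absorbed into S, so diagonal entries of S may go
    negative, as the analytic SVD permits.
    """
    Xs, Ss, Ys = [], [], []
    for E in mats:
        X, s, Yt = np.linalg.svd(E)
        S, Y = np.diag(s), Yt.T
        if Xs:
            for j in range(X.shape[1]):
                if X[:, j] @ Xs[-1][:, j] < 0:
                    X[:, j] *= -1.0
                    S[j, j] *= -1.0  # keeps E = X S Y^T unchanged
        Xs.append(X); Ss.append(S); Ys.append(Y)
    return Xs, Ss, Ys

# Path whose second singular value passes through zero at t = 1:
# E(t) = R(t) @ diag(2, t - 1), with R(t) a rotation matrix.
ts = np.linspace(0.0, 2.0, 41)
mats = [np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        @ np.diag([2.0, t - 1.0]) for t in ts]
Xs, Ss, Ys = smooth_svd_path(mats)
ok = all(np.allclose(X @ S @ Y.T, E)
         for X, S, Y, E in zip(Xs, Ss, Ys, mats))
```

On this path a signed diagonal entry of S can cross zero smoothly, whereas the conventional nonnegative singular value |t - 1| has a kink at t = 1.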
Abstract:
Scoring rules are an important tool for evaluating the performance of probabilistic forecasting schemes. A scoring rule is called strictly proper if its expectation is optimal if and only if the forecast probability represents the true distribution of the target. In the binary case, strictly proper scoring rules allow for a decomposition into terms related to the resolution and the reliability of a forecast. This fact is particularly well known for the Brier Score. In this article, this result is extended to forecasts for finite-valued targets. Both resolution and reliability are shown to have a positive effect on the score. It is demonstrated that resolution and reliability are directly related to forecast attributes that are desirable on grounds independent of the notion of scores. This finding can be considered an epistemological justification of measuring forecast quality by proper scoring rules. A link is provided to the original work of DeGroot and Fienberg, extending their concepts of sufficiency and refinement. The relation to the conjectured sharpness principle of Gneiting et al. is elucidated.
Abstract:
Proper scoring rules provide a useful means to evaluate probabilistic forecasts. Independently of scoring rules, it has been argued that reliability and resolution are desirable forecast attributes. The mathematical expectation value of the score allows for a decomposition into reliability- and resolution-related terms, demonstrating a relationship between scoring rules and reliability/resolution. A similar decomposition holds for the empirical (i.e. sample average) score over an archive of forecast–observation pairs. This empirical decomposition, though, provides an overly optimistic estimate of the potential score (i.e. the optimum score which could be obtained through recalibration), showing that a forecast assessment based solely on the empirical resolution and reliability terms will be misleading. The differences between the theoretical and empirical decompositions are investigated, and specific recommendations are given on how to obtain better estimators of reliability and resolution in the case of the Brier and Ignorance scoring rules.
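For the Brier score, the decomposition discussed in the two abstracts above is the classical reliability/resolution/uncertainty split, BS = REL − RES + UNC. A minimal sketch for binary outcomes follows; the equal-width binning is an illustrative choice, and, as the abstract notes, the binned empirical estimates are biased for continuous forecasts: the identity is exact only when all forecasts within a bin are identical, as in the example data below.

```python
import numpy as np

def brier_decomposition(p, y, bins=10):
    """Empirical Murphy decomposition BS = REL - RES + UNC.

    p: forecast probabilities in [0, 1]; y: binary outcomes (0/1).
    Forecasts are grouped into `bins` equal-width probability bins;
    the identity is exact when forecasts within a bin are identical.
    """
    p, y = np.asarray(p, float), np.asarray(y, float)
    n = len(p)
    edges = np.linspace(0.0, 1.0, bins + 1)
    idx = np.clip(np.digitize(p, edges[1:-1]), 0, bins - 1)
    ybar = y.mean()
    rel = res = 0.0
    for k in range(bins):
        mask = idx == k
        if not mask.any():
            continue
        n_k, f_k, o_k = mask.sum(), p[mask].mean(), y[mask].mean()
        rel += n_k * (f_k - o_k) ** 2   # reliability (calibration) term
        res += n_k * (o_k - ybar) ** 2  # resolution term
    return rel / n, res / n, ybar * (1.0 - ybar)

# Two distinct forecast values, so the decomposition is exact here
p = np.array([0.2] * 5 + [0.8] * 5)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 0, 1])
rel, res, unc = brier_decomposition(p, y)
bs = np.mean((p - y) ** 2)  # equals rel - res + unc for this data
```

With within-bin forecast variability, rel − res + unc no longer reproduces the sample Brier score, which is one face of the estimator bias the second abstract investigates.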
Abstract:
The extra-tropical response to El Niño in configurations of a coupled model with increased horizontal resolution in the oceanic component is shown to be more realistic than in configurations with a low-resolution oceanic component. This general conclusion is independent of the atmospheric resolution. Resolving small-scale processes in the ocean produces a more realistic oceanic mean state, with a reduced cold-tongue bias, which in turn allows the atmospheric model component to be forced more realistically. A realistic atmospheric basic state is critical in order to represent Rossby wave propagation in response to El Niño, and hence the extra-tropical response to El Niño. Through the use of high- and low-resolution configurations of the forced atmosphere-only model component, we show that, in isolation, atmospheric resolution does not significantly affect the simulation of the extra-tropical response to El Niño. It is demonstrated, through perturbations to the SST forcing of the atmospheric model component, that biases in the climatological SST field typical of coupled model configurations with low oceanic resolution can account for the erroneous atmospheric basic state seen in these coupled model configurations. These results highlight the importance of resolving small-scale oceanic processes in producing a realistic large-scale mean climate in coupled models, and suggest that it may be possible to “squeeze out” valuable extra performance from coupled models through increases to oceanic resolution alone.
Abstract:
There is large uncertainty about the magnitude of warming and how rainfall patterns will change in response to any given scenario of future changes in atmospheric composition and land use. The models used for future climate projections were developed and calibrated using climate observations from the past 40 years. The geologic record of environmental responses to climate changes provides a unique opportunity to test model performance outside this limited climate range. Evaluation of model simulations against palaeodata shows that models reproduce the direction and large-scale patterns of past changes in climate, but tend to underestimate the magnitude of regional changes. As part of the effort to reduce model-related uncertainty and produce more reliable estimates of twenty-first century climate, the Palaeoclimate Modelling Intercomparison Project is systematically applying palaeoevaluation techniques to simulations of the past run with the models used to make future projections. This evaluation will provide assessments of model performance, including whether a model is sufficiently sensitive to changes in atmospheric composition, as well as providing estimates of the strength of biosphere and other feedbacks that could amplify the model response to these changes and modify the characteristics of climate variability.