950 results for Decomposition of Ranked Models


Relevance: 100.00%

Abstract:

While state-of-the-art models of Earth's climate system have improved tremendously over the last 20 years, nontrivial structural flaws still hinder their ability to forecast the decadal dynamics of the Earth system realistically. Contrasting the skill of these models not only with each other but also with empirical models can reveal the space and time scales on which simulation models exploit their physical basis effectively and quantify their ability to add information to operational forecasts. The skill of decadal probabilistic hindcasts for annual global-mean and regional-mean temperatures from the EU Ensemble-Based Predictions of Climate Changes and Their Impacts (ENSEMBLES) project is contrasted with several empirical models. Both the ENSEMBLES models and a “dynamic climatology” empirical model show probabilistic skill above that of a static climatology for global-mean temperature. The dynamic climatology model, however, often outperforms the ENSEMBLES models. The fact that empirical models display skill similar to that of today's state-of-the-art simulation models suggests that empirical forecasts can improve decadal forecasts for climate services, just as in weather, medium-range, and seasonal forecasting. It is suggested that the direct comparison of simulation models with empirical models becomes a regular component of large model forecast evaluations. Doing so would clarify the extent to which state-of-the-art simulation models provide information beyond that available from simpler empirical models and clarify current limitations in using simulation forecasting for decision support. Ultimately, the skill of simulation models based on physical principles is expected to surpass that of empirical models in a changing climate; their direct comparison provides information on progress toward that goal, which is not available in model–model intercomparisons.
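
As a companion to the skill comparison described above, the sketch below shows one common way to quantify "probabilistic skill above that of a static climatology": score each forecast ensemble with a proper score such as the continuous ranked probability score (CRPS) and express it as a skill score relative to a climatological reference. The score choice, ensemble values, and observation are assumptions for illustration, not the paper's data or its exact metric.

```python
import numpy as np

def crps_ensemble(members, obs):
    """Continuous ranked probability score for one ensemble forecast.

    CRPS = E|X - obs| - 0.5 * E|X - X'|, estimated from the members.
    Lower is better; for a single member it reduces to absolute error.
    """
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

def skill_score(forecast_crps, reference_crps):
    """Skill relative to a reference (e.g. static climatology); > 0 means added skill."""
    return 1.0 - forecast_crps / reference_crps

# Hypothetical annual global-mean temperature anomalies (degrees C)
obs = 0.42
hindcast_members = np.array([0.35, 0.48, 0.40, 0.51, 0.44])            # simulation hindcast
climatology_draws = np.random.default_rng(0).normal(0.0, 0.25, 500)    # static climatology

crps_model = crps_ensemble(hindcast_members, obs)
crps_clim = crps_ensemble(climatology_draws, obs)
print(f"CRPS model {crps_model:.3f}, climatology {crps_clim:.3f}, "
      f"skill score {skill_score(crps_model, crps_clim):.2f}")
```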

Relevance: 100.00%

Abstract:

Simulation of the lifting of dust from the planetary surface is of substantially greater importance on Mars than on Earth, due to the fundamental role that atmospheric dust plays in the former’s climate, yet the dust emission parameterisations used to date in martian global climate models (MGCMs) lag, understandably, behind their terrestrial counterparts in terms of sophistication. Recent developments in estimating surface roughness length over all martian terrains and in modelling atmospheric circulations at regional to local scales (less than O(100 km)) present an opportunity to formulate an improved wind stress lifting parameterisation. We have upgraded the conventional scheme by including the spatially varying roughness length in the lifting parameterisation in a fully consistent manner (thereby correcting a possible underestimation of the true threshold level for wind stress lifting), and used a modification to account for deviations from neutral stability in the surface layer. Following these improvements, it is found that wind speeds at typical MGCM resolution never reach the lifting threshold at most gridpoints: winds fall particularly short in the southern midlatitudes, where mean roughness is large. Sub-grid scale variability, manifested in both the near-surface wind field and the surface roughness, is then considered, and is found to be a crucial means of bridging the gap between model winds and thresholds. Both forms of small-scale variability contribute to the formation of dust emission ‘hotspots’: areas within the model gridbox with particularly favourable conditions for lifting, namely a smooth surface combined with strong near-surface gusts. Such small-scale emission could in fact be particularly influential on Mars, due both to the intense positive radiative feedbacks that can drive storm growth and to a strong hysteresis effect on saltation. By modelling this variability, dust lifting is predicted at the locations at which dust storms are frequently observed, including the flushing storm sources of Chryse and Utopia, and southern midlatitude areas from which larger storms tend to initiate, such as Hellas and Solis Planum. The seasonal cycle of emission, which includes a double-peaked structure in northern autumn and winter, also appears realistic. Significant increases to lifting rates are produced for any sensible choices of parameters controlling the sub-grid distributions used, but results are sensitive to the smallest scale of variability considered, which high-resolution modelling suggests should be O(1 km) or less. Use of such models in future will permit the use of a diagnosed (rather than prescribed) variable gustiness intensity, which should further enhance dust lifting in the southern hemisphere in particular.
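
The key step described above, that a grid-box mean wind below the lifting threshold can still produce emission once sub-grid variability is represented, can be illustrated by averaging a threshold lifting law over an assumed sub-grid gust distribution. The Weibull distribution, the simple flux law, and all parameter values below are illustrative assumptions, not the scheme developed in the paper.

```python
import numpy as np
from math import gamma

def exceedance_probability(ustar_mean, ustar_thresh, weibull_k=2.0):
    """Probability that the instantaneous friction velocity exceeds the lifting
    threshold, with sub-grid gustiness represented (as an assumption here) by a
    Weibull distribution whose mean equals the grid-box value."""
    c = ustar_mean / gamma(1.0 + 1.0 / weibull_k)   # Weibull scale parameter
    return np.exp(-(ustar_thresh / c) ** weibull_k)

def mean_dust_flux(ustar_mean, ustar_thresh, weibull_k=2.0, samples=100_000, seed=0):
    """Sub-grid mean of a simple threshold lifting law F ~ u*^3 (1 - u*_t^2 / u*^2),
    evaluated by Monte Carlo over the assumed gust distribution."""
    rng = np.random.default_rng(seed)
    c = ustar_mean / gamma(1.0 + 1.0 / weibull_k)
    u = c * rng.weibull(weibull_k, samples)
    flux = np.where(u > ustar_thresh, u**3 * (1.0 - (ustar_thresh / u) ** 2), 0.0)
    return flux.mean()

# Grid-box mean below threshold: zero lifting without sub-grid variability,
# but a non-zero exceedance probability and flux once gustiness is included.
print("exceedance probability:", exceedance_probability(ustar_mean=0.8, ustar_thresh=1.2))
print("sub-grid mean flux:    ", mean_dust_flux(ustar_mean=0.8, ustar_thresh=1.2))
```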

Relevance: 100.00%

Abstract:

In this paper, Bond Graphs are employed to develop a novel mathematical model of conventional switched-mode DC-DC converters valid for both continuous and discontinuous conduction modes (CCM and DCM). A unique-causality bond graph model of these hybrid systems is proposed, in which the switch is represented by a Modulated Transformer with a binary input and the diode by a resistor with fixed conductance causality. The operation of the diode is controlled using an if-then function within the model. The resulting hybrid model is applied to Boost and Buck converters as their operation changes from CCM to DCM and back to CCM. The vector fields of the models show validity over a wide operating area, and comparison with PSPICE simulations of the converters reveals the high accuracy of the proposed model, with the Normalised Root Mean Square Error and the Maximum Absolute Error remaining adequately low. The model is also experimentally tested on a Buck topology.
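
As a rough companion to the switched-model idea above (switch as a binary-modulated element, diode as an if-then controlled element), here is a minimal state-space sketch of a Buck converter that moves between CCM and DCM. It is a plain ODE implementation rather than the bond-graph formulation of the paper, and all component values are assumed.

```python
# Assumed component values (not from the paper)
Vin, L, C, R = 12.0, 100e-6, 100e-6, 50.0
fs, duty = 20e3, 0.3                      # switching frequency and duty cycle
dt = 1.0 / fs / 200                       # integration step

def derivatives(iL, vC, s, diode_on):
    """Buck converter state equations; `s` is the binary switch command."""
    if s:                    # switch closed: inductor sees Vin - vC
        vL = Vin - vC
    elif diode_on:           # switch open, diode conducting: inductor sees -vC
        vL = -vC
    else:                    # DCM interval: inductor current clamped at zero
        vL = 0.0
    return vL / L, (iL - vC / R) / C

iL, vC, t, t_end = 0.0, 0.0, 0.0, 20e-3
while t < t_end:
    s = (t * fs) % 1.0 < duty                     # PWM switch command
    diode_on = (not s) and iL > 0.0               # if-then diode rule
    diL, dvC = derivatives(iL, vC, s, diode_on)
    iL, vC = iL + diL * dt, vC + dvC * dt
    if not s and iL < 0.0:                        # current cannot reverse: enter DCM
        iL = 0.0
    t += dt

print(f"output voltage ~ {vC:.2f} V (duty * Vin = {duty * Vin:.2f} V would be the ideal CCM value)")
```

With this light load the converter runs in DCM, so the output settles above duty * Vin, which is the kind of CCM-to-DCM behaviour the hybrid model is meant to capture.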

Relevance: 100.00%

Abstract:

The repeated introduction of an organic resource to soil can result in its enhanced degradation. This phenomenon is of primary importance in agroecosystems, where the dynamics of repeated nutrient, pesticide, and herbicide amendment must be understood to achieve optimal yield. Although not yet investigated, the repeated introduction of cadaveric material to soil is an important area of research in forensic science and cemetery planning. It is not currently understood what effects the repeated burial of cadaveric material has on cadaver decomposition or soil processes such as carbon mineralization. To address this gap in knowledge, we conducted a laboratory experiment using ovine (Ovis aries) skeletal muscle tissue (striated muscle used for locomotion) and three contrasting soils (brown earth, rendzina, podsol) from Great Britain. This experiment comprised two stages. In Stage I, skeletal muscle tissue (150 g as 1.5 g cubes) was buried in sieved (4.6 mm) soil (10 kg dry weight) calibrated to 60% water holding capacity (WHC) and allowed to decompose in the dark for 70 days at 22 °C. Control samples comprised soil without skeletal muscle tissue. In Stage II, soils were weighed (100 g dry weight at 60% WHC) into 1285 ml incubation microcosms. Half of the soils were designated for a second tissue amendment, which comprised the burial (2.5 cm depth) of a 1.5 g cube of skeletal muscle tissue. The remaining half of the samples did not receive tissue. Thus, four treatments were used in each soil, reflecting all possible combinations of tissue burial (+) and control (−). Subsequent measures of tissue mass loss, carbon dioxide-carbon evolution, soil microbial biomass carbon, metabolic quotient and soil pH show that repeated burial of skeletal muscle tissue was associated with a significantly greater rate of decomposition in all soils. However, soil microbial biomass following repeated burial was either not significantly different from (brown earth, podsol) or significantly lower than (rendzina) that in new gravesoil. Based on these results, we conclude that the enhanced decomposition of skeletal muscle tissue was most likely due to the proliferation of zymogenous soil microbes better able to use cadaveric material re-introduced to the soil.
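
Of the measures listed, the metabolic quotient is the one that ties two of the others together: it is conventionally computed as basal respiration (CO2-C evolution rate) per unit of microbial biomass carbon. A tiny worked sketch with invented numbers, purely for illustration (these are not data from the study):

```python
# Hypothetical example values (not data from the study)
co2_c_rate = 12.5          # CO2-C evolved, micrograms C per gram soil per hour
microbial_biomass_c = 420  # microbial biomass C, micrograms C per gram soil

# Metabolic quotient qCO2: respiration per unit of microbial biomass C
qco2 = co2_c_rate / microbial_biomass_c
print(f"qCO2 = {qco2:.4f} ug CO2-C per ug biomass C per hour")
```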

Relevance: 100.00%

Abstract:

Three influential theoretical models of OCD focus upon the cognitive factors of inflated responsibility (Salkovskis, 1985), thought-action fusion (Rachman, 1993) and meta-cognitive beliefs (Wells and Matthews, 1994). Little is known about the relevance of these models in adolescents or about the nature of any direct or mediating relationships between these variables and OCD symptoms. This was a cross-sectional correlational design with 223 non-clinical adolescents aged 13 to 16 years. All participants completed questionnaires measuring inflated responsibility, thought-action fusion, meta-cognitive beliefs and obsessive-compulsive symptoms. Inflated responsibility, thought-action fusion and meta-cognitive beliefs were significantly associated with higher levels of obsessive-compulsive symptoms. These variables accounted for 35% of the variance in obsessive-compulsive symptoms, with inflated responsibility and meta-cognitive beliefs both emerging as significant independent predictors. Inflated responsibility completely mediated the effect of thought-action fusion and partially mediated the effect of meta-cognitive beliefs. Support for the downward extension of cognitive models to understanding OCD in a younger population was shown. Findings suggest that inflated responsibility and meta-cognitive beliefs may be particularly important cognitive concepts in OCD. Methodological limitations must be borne in mind and future research is needed to replicate and extend findings in clinical samples.
Keywords: Obsessive compulsive disorder, adolescents, cognitive models.
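
The mediation result reported above (inflated responsibility fully mediating thought-action fusion, partially mediating meta-cognitive beliefs) follows the usual regression logic: a predictor's effect on symptoms shrinks, or becomes non-significant, once the mediator enters the model. Below is a minimal sketch of that comparison using statsmodels on simulated data; the variable names and generated scores are placeholders, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 223  # same sample size as the study, but the data here are entirely simulated

# Simulated scores: thought-action fusion (taf) -> inflated responsibility (resp) -> symptoms (oc)
taf = rng.normal(size=n)
resp = 0.6 * taf + rng.normal(size=n)
oc = 0.5 * resp + rng.normal(size=n)
df = pd.DataFrame({"taf": taf, "resp": resp, "oc": oc})

total = smf.ols("oc ~ taf", df).fit()              # total effect of the predictor
mediated = smf.ols("oc ~ taf + resp", df).fit()    # direct effect, controlling for the mediator

print("total effect of taf: ", round(total.params["taf"], 3))
print("direct effect of taf:", round(mediated.params["taf"], 3),
      "p =", round(mediated.pvalues["taf"], 3))
# A near-zero, non-significant direct effect with a significant mediator is the
# pattern read as 'complete mediation' in this framework.
```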

Relevance: 100.00%

Abstract:

Cognitive models of obsessive compulsive disorder (OCD) have been influential in understanding and treating the disorder in adults. Cognitive models may also be applicable to children and adolescents and would have important implications for treatment. The aim of this systematic review was to evaluate research that examined the applicability of the cognitive model of OCD to children and adolescents. Inclusion criteria were set broadly, but most studies identified included data regarding responsibility appraisals, thought-action fusion or meta-cognitive models of OCD in children or adolescents. Eleven studies were identified in a systematic literature search. Seven studies were with non-clinical samples, and 10 studies were cross-sectional. Only one study did not support cognitive models of OCD in children and adolescents; it used a clinical sample and was the only experimental study. Overall, the results strongly supported the applicability of cognitive models of OCD to children and young people. There were, however, clear gaps in the literature. Future research should include experimental studies and clinical groups, and should test which of the different models provides more explanatory power.

Relevance: 100.00%

Abstract:

Human body thermoregulation models have been widely used in human physiology and thermal comfort studies. However, there are few studies on evaluation methods for these models. This paper summarises the existing evaluation methods and critically analyses their flaws. On this basis, a method for evaluating the accuracy of human body thermoregulation models is proposed. The new evaluation method contributes to the development of such models and validates their accuracy both statistically and empirically, and it allows the accuracy of different models to be compared. Furthermore, the new method is not only suitable for evaluating human body thermoregulation models, but can in principle also be applied to evaluating the accuracy of population-based models in other research fields.

Relevance: 100.00%

Abstract:

Sea-ice concentrations in the Laptev Sea simulated by the coupled North Atlantic-Arctic Ocean-Sea-Ice Model and Finite Element Sea-Ice Ocean Model are evaluated using sea-ice concentrations from Advanced Microwave Scanning Radiometer-Earth Observing System satellite data and a polynya classification method for winter 2007/08. While developed to simulate large-scale sea-ice conditions, both models are analysed here in terms of polynya simulation. The main modification of both models in this study is the implementation of a landfast-ice mask. Simulated sea-ice fields from different model runs are compared with emphasis placed on the impact of this prescribed landfast-ice mask. We demonstrate that sea-ice models are not able to simulate flaw polynyas realistically when used without a fast-ice description. Our investigations indicate that without landfast ice and with coarse horizontal resolution the models overestimate the fraction of open water in the polynya. This is not because a realistic polynya appears but rather because of a larger-scale reduction of ice concentrations and smoothed ice-concentration fields. After implementation of a landfast-ice mask, the polynya location is realistically simulated but the total open-water area is still overestimated in most cases. The study shows that the fast-ice parameterization is essential for model improvements. However, further improvements are necessary in order to progress from the simulation of large-scale features in the Arctic towards a more detailed simulation of smaller-scale features (here polynyas) in an Arctic shelf sea.
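
The central diagnostic described above, the open-water fraction in the flaw polynya with and without a prescribed landfast-ice mask, can be sketched as a simple masking-and-averaging operation on a gridded ice-concentration field. The array shapes, threshold concentration, and masks below are illustrative assumptions, not the models' actual grids or the study's classification.

```python
import numpy as np

def open_water_fraction(ice_conc, region_mask, landfast_mask=None, fast_ice_conc=1.0):
    """Mean open-water fraction (1 - ice concentration) over a polynya analysis region.

    ice_conc      : 2-D simulated sea-ice concentration field in [0, 1]
    region_mask   : boolean array selecting the polynya analysis region
    landfast_mask : optional boolean array; where True, the concentration is
                    overridden by a prescribed fast-ice value
    """
    conc = ice_conc.copy()
    if landfast_mask is not None:
        conc[landfast_mask] = fast_ice_conc      # prescribe immobile fast ice
    return float(np.mean(1.0 - conc[region_mask]))

# Illustrative 4x4 field: lower concentrations near the "coast" (first row)
conc = np.array([[0.2, 0.3, 0.4, 0.9],
                 [0.5, 0.6, 0.8, 0.95],
                 [0.9, 0.9, 0.95, 1.0],
                 [1.0, 1.0, 1.0, 1.0]])
region = np.zeros_like(conc, dtype=bool); region[:2, :] = True
fast = np.zeros_like(conc, dtype=bool);  fast[0, :2] = True   # fast ice along the coast

print("without fast-ice mask:", open_water_fraction(conc, region))
print("with fast-ice mask:   ", open_water_fraction(conc, region, fast))
```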

Relevance: 100.00%

Abstract:

Phylogenetic comparative methods are increasingly used to give new insights into the dynamics of trait evolution in deep time. For continuous traits the core of these methods is a suite of models that attempt to capture evolutionary patterns by extending the Brownian constant variance model. However, the properties of these models are often poorly understood, which can lead to the misinterpretation of results. Here we focus on one of these models – the Ornstein-Uhlenbeck (OU) model. We show that the OU model is frequently incorrectly favoured over simpler models when using likelihood ratio tests, and that many studies fitting this model use datasets that are small and prone to this problem. We also show that very small amounts of error in datasets can have profound effects on the inferences derived from OU models. Our results suggest that simulating fitted models and comparing with empirical results is critical when fitting OU and other extensions of the Brownian model. We conclude by making recommendations for best practice in fitting OU models in phylogenetic comparative analyses, and for interpreting the parameters of the OU model.
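
The overfitting issue described above, OU favoured over Brownian motion by a likelihood-ratio test even when the data are Brownian, can be illustrated in a deliberately simplified, non-phylogenetic setting: simulate a random walk, fit both a random walk and a discretized OU process (an AR(1) with an optimum), and apply the LRT with its nominal chi-square reference. Everything below is an assumption-laden illustration, not the paper's phylogenetic analysis; the nominal chi-square reference is known to be unreliable at the Brownian boundary, which is part of the point.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate_bm(n=50, sigma=1.0):
    """Brownian-motion (random-walk) trajectory sampled at unit intervals."""
    return np.cumsum(rng.normal(0.0, sigma, n))

def loglik_bm(x):
    """Max log-likelihood of a driftless random walk (1 free parameter: sigma^2)."""
    d = np.diff(x)
    s2 = np.mean(d**2)
    return np.sum(stats.norm.logpdf(d, 0.0, np.sqrt(s2))), 1

def loglik_ou(x):
    """Max log-likelihood of a discretized OU process, i.e. an AR(1):
    x[t+1] = a + b * x[t] + noise (3 free parameters)."""
    y, xlag = x[1:], x[:-1]
    X = np.column_stack([np.ones_like(xlag), xlag])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = np.mean(resid**2)
    return np.sum(stats.norm.logpdf(resid, 0.0, np.sqrt(s2))), 3

# Data generated under BM: how often does the LRT nevertheless prefer OU?
trials, false_prefer = 500, 0
for _ in range(trials):
    x = simulate_bm()
    llb, kb = loglik_bm(x)
    llo, ko = loglik_ou(x)
    lrt = 2.0 * (llo - llb)
    if stats.chi2.sf(lrt, df=ko - kb) < 0.05:
        false_prefer += 1
print(f"OU preferred in {false_prefer / trials:.1%} of BM-generated datasets")
```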

Relevance: 100.00%

Abstract:

We construct and compare in this work a variety of simple models for strange stars, namely, hypothetical self-bound objects made of a cold stable version of the quark-gluon plasma. Exact, quasi-exact and numerical models are examined to find the most economical description for these objects. A simple and successful parametrization of them is given in terms of the central density, and the differences among the models are explicitly shown and discussed. In particular, we present a model starting with a Gaussian ansatz for the density profile that provides a very accurate and almost complete analytical integration of the problem, modulo a small difference for one of the metric potentials.
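
The Gaussian-ansatz approach mentioned above can be sketched numerically: prescribe the density profile, build the enclosed mass by quadrature, and integrate the Tolman-Oppenheimer-Volkoff pressure equation inward from a zero-pressure surface. The sketch uses geometrized units (G = c = 1) and invented parameter values; it is only a rough numerical illustration, not the paper's quasi-exact analytical solution.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Geometrized units (G = c = 1); all parameter values are invented for illustration.
rho_c = 1.0e-3   # central density, km^-2
r0 = 8.0         # width of the Gaussian density profile, km
R = 10.0         # radius at which the zero-pressure surface is imposed, km

def rho(r):
    """Gaussian ansatz for the density profile."""
    return rho_c * np.exp(-(r / r0) ** 2)

def mass(r):
    """Enclosed mass m(r) by simple quadrature of 4*pi*s^2*rho(s)."""
    s = np.linspace(1e-6, r, 1000)
    return np.trapz(4.0 * np.pi * s**2 * rho(s), s)

def tov_rhs(r, y):
    """Tolman-Oppenheimer-Volkoff pressure gradient for the prescribed rho(r)."""
    P = y[0]
    m = mass(r)
    return [-(rho(r) + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))]

# Integrate inward from the surface, where P(R) = 0, towards the centre.
sol = solve_ivp(tov_rhs, (R, 1e-3), [0.0], rtol=1e-6, atol=1e-12)
print("total mass (geometrized units):", mass(R))
print("central pressure (geometrized units):", sol.y[0, -1])
```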

Relevance: 100.00%

Abstract:

We study compressible magnetohydrodynamic turbulence, which holds the key to many astrophysical processes, including star formation and cosmic-ray propagation. To account for the variations of the magnetic field in the strongly turbulent fluid, we use a wavelet decomposition of the turbulent velocity field into Alfven, slow, and fast modes; this extends the Cho & Lazarian decomposition approach, which is based on Fourier transforms. The wavelets allow us to follow the variations of the local direction of the magnetic field and therefore improve the quality of the decomposition compared to Fourier transforms, which are performed in the mean-field reference frame. For each resulting component, we calculate the spectra and two-point statistics such as longitudinal and transverse structure functions as well as higher order intermittency statistics. In addition, we perform a Helmholtz-Hodge decomposition of the velocity field into incompressible and compressible parts and analyze these components. We find that the turbulence intermittency is different for different components, and we show that the intermittency statistics depend on whether the phenomenon was studied in the global reference frame related to the mean magnetic field or in the frame defined by the local magnetic field. The dependencies of the measures we obtained are different for different components of the velocity; for instance, we show that while the Alfven mode intermittency changes marginally with the Mach number, the intermittency of the fast mode is substantially affected by the change.
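
The Helmholtz-Hodge step mentioned above, splitting the velocity field into incompressible (solenoidal) and compressible (curl-free) parts, has a standard spectral implementation for periodic boxes: project the Fourier-transformed velocity onto and orthogonal to the wavevector. The sketch below shows that projection only (not the wavelet Alfven/slow/fast decomposition), on an assumed periodic cubic grid.

```python
import numpy as np

def helmholtz_hodge(vx, vy, vz, box_size=1.0):
    """Split a periodic 3-D velocity field into solenoidal and compressive parts
    by Fourier projection: v_comp = k (k . v_hat) / |k|^2, v_sol = v_hat - v_comp."""
    n = vx.shape[0]
    k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                     # avoid division by zero for the mean mode

    fx, fy, fz = (np.fft.fftn(f) for f in (vx, vy, vz))
    div = kx * fx + ky * fy + kz * fz     # k . v_hat
    cx, cy, cz = kx * div / k2, ky * div / k2, kz * div / k2
    for c in (cx, cy, cz):
        c[0, 0, 0] = 0.0                  # mean flow assigned to the solenoidal part

    comp = [np.real(np.fft.ifftn(c)) for c in (cx, cy, cz)]
    sol = [np.real(np.fft.ifftn(f - c)) for f, c in zip((fx, fy, fz), (cx, cy, cz))]
    return sol, comp

# Quick self-check on random data: the two parts must sum back to the original field.
rng = np.random.default_rng(0)
v = [rng.normal(size=(16, 16, 16)) for _ in range(3)]
sol, comp = helmholtz_hodge(*v)
print("max reconstruction error:",
      max(np.max(np.abs(s + c - o)) for s, c, o in zip(sol, comp, v)))
```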

Relevance: 100.00%

Abstract:

Decomposition was studied in a reciprocal litter transplant experiment to examine the effects of forest type, litter quality and their interaction on leaf decomposition in four tropical forests in south-east Brazil. Litterbags were used to measure decomposition of leaves of one tree species from each forest type: Calophyllum brasiliense from restinga forest; Guapira opposita from Atlantic forest; Esenbeckia leiocarpa from semi-deciduous forest; and Copaifera langsdorffii from cerradão. Decomposition rates in rain forests (Atlantic and restinga) were twice as fast as those in seasonal forests (semi-deciduous and cerradão), suggesting that intensity and distribution of precipitation are important predictors of decomposition rates at regional scales. Decomposition rates varied by species, in the following order: E. leiocarpa > C. langsdorffii > G. opposita > C. brasiliense. However, there was no correlation between decomposition rates and chemical litter quality parameters: C:N, C:P, lignin concentration and lignin:N. The interaction between forest type and litter quality was positive mainly because C. langsdorffii decomposed faster than expected in its native forest. This is a potential indication of a decomposer's adaptation to specific substrates in a tropical forest. These findings suggest that, besides climate, interactions between decomposers and plants might play an essential role in decomposition processes and must be better understood.
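
Litterbag decomposition rates of the kind compared above are conventionally summarised by a single-exponential decay constant k fitted to the fraction of mass remaining over time (Olson's model, M(t) = M0 e^(-kt)). A short sketch with made-up mass-loss data; the times and fractions are placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def mass_remaining(t, k):
    """Single-exponential (Olson) decay model, as a fraction of initial mass."""
    return np.exp(-k * t)

# Hypothetical litterbag data: collection times (years) and fraction of mass remaining
t = np.array([0.0, 0.25, 0.5, 1.0, 1.5, 2.0])
frac = np.array([1.00, 0.82, 0.70, 0.52, 0.40, 0.30])

(k_hat,), _ = curve_fit(mass_remaining, t, frac, p0=[0.5])
half_life = np.log(2) / k_hat
print(f"decay constant k = {k_hat:.2f} per year, mass half-life = {half_life:.2f} years")
```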

Relevance: 100.00%

Abstract:

The highly hydrophobic fluorophore Laurdan (6-dodecanoyl-2-dimethylaminonaphthalene) has been widely used as a fluorescent probe to monitor lipid membranes. More precisely, it monitors the structure and polarity of the bilayer surface, where its fluorescent moiety is supposed to reside. The present paper discusses the high sensitivity of Laurdan fluorescence through the decomposition of its emission spectrum into two Gaussian bands, which correspond to emissions from two different excited states, one more solvent relaxed than the other. It will be shown that the analysis of the area fraction of each band is more sensitive to bilayer structural changes than the largely used parameter called Generalized Polarization, possibly because the latter does not completely separate the fluorescence emission from the two different excited states of Laurdan. Moreover, it will be shown that this decomposition should be done with the spectrum as a function of energy, and not wavelength. Due to the presence of the two emission bands in the Laurdan spectrum, fluorescence anisotropy should be measured around 480 nm, to be able to monitor the fluorescence emission from one excited state only, the solvent relaxed state. Laurdan will be used to monitor the complex structure of the anionic phospholipid DMPG (dimyristoyl phosphatidylglycerol) at different ionic strengths, and the alterations caused in gel and fluid membranes by the interaction of cationic peptides and cholesterol. By analyzing both the emission spectrum decomposition and the anisotropy, it was possible to distinguish between effects on the packing and on the hydration of the lipid membrane surface. It could be clearly detected that a more potent analog of the melanotropic hormone alpha-MSH (Ac-Ser(1)-Tyr(2)-Ser(3)-Met(4)-Glu(5)-His(6)-Phe(7)-Arg(8)-Trp(9)-Gly(10)-Lys(11)-Pro(12)-Val(13)-NH(2)) was more effective in rigidifying the bilayer surface of fluid membranes than the hormone, though the hormone significantly decreased the bilayer surface hydration.
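
The decomposition described above, two Gaussian emission bands fitted to the spectrum expressed as a function of energy with the area fraction of each band as the readout, can be sketched with a standard nonlinear fit. One practical detail: if intensities are recorded per unit wavelength, converting to an energy axis also involves the Jacobian factor lambda^2 / (hc). The spectrum below is synthetic and all values are placeholders, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

HC_EV_NM = 1239.84  # h*c in eV*nm

def two_gaussians(E, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian bands in energy space (eV)."""
    g1 = a1 * np.exp(-0.5 * ((E - mu1) / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((E - mu2) / s2) ** 2)
    return g1 + g2

# Synthetic Laurdan-like spectrum recorded versus wavelength (nm), illustration only
wl = np.linspace(400, 600, 400)
I_wl = (np.exp(-0.5 * ((wl - 440) / 18) ** 2) +
        0.8 * np.exp(-0.5 * ((wl - 490) / 25) ** 2))

# Convert to an energy axis; include the Jacobian so band areas are preserved
E = HC_EV_NM / wl
I_E = I_wl * wl**2 / HC_EV_NM
order = np.argsort(E)
E, I_E = E[order], I_E[order]

p0 = [1.0, 2.8, 0.1, 1.0, 2.5, 0.1]   # rough guesses: unrelaxed (~2.8 eV) and relaxed (~2.5 eV) bands
popt, _ = curve_fit(two_gaussians, E, I_E, p0=p0)
a1, mu1, s1, a2, mu2, s2 = popt
area1, area2 = a1 * s1, a2 * s2       # Gaussian area is amplitude * sigma * sqrt(2*pi); the constant cancels in the ratio
print(f"area fraction of the higher-energy (less relaxed) band: {area1 / (area1 + area2):.2f}")
```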

Relevance: 100.00%

Abstract:

In this paper, a detailed study of the capacitance spectra obtained from Au/doped-polyaniline/Al structures in the frequency domain (0.05 Hz-10 MHz) and at different temperatures (150-340 K) is carried out. The capacitance behavior of semiconductors can be appropriately described using abrupt cut-off models, since such models assume that the response times of electronic gap states vary rapidly with a certain abscissa, so that only states up to a sharp cut-off, which depends on both temperature and frequency, can follow the ac modulation. Two models based on the abrupt cut-off concept, originally developed to describe inorganic semiconductor devices, have been used to analyze the capacitance spectra of devices based on doped polyaniline (PANI), a well-known polymeric semiconductor with numerous potential technological applications. The application of these models allowed the determination of significant parameters, such as the Debye length (≈ 20 nm), the position of the bulk Fermi level (≈ 320 meV) and the associated density of states (≈ 2 × 10^18 eV^-1 cm^-3), the width of the space charge region (≈ 70 nm), the built-in potential (≈ 780 meV), and the distribution of gap states.

Relevance: 100.00%

Abstract:

We consider a generalized leverage matrix useful for the identification of influential units and observations in linear mixed models and show how a decomposition of this matrix may be employed to identify high leverage points for both the marginal fitted values and the random effect component of the conditional fitted values. We illustrate the different uses of the two components of the decomposition with a simulated example as well as with a real data set.
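
One common way to write such a decomposition is to split the generalized leverage of the conditional fitted values into a marginal component (from the fixed effects under generalized least squares) plus a random-effect component. Below is a minimal numpy sketch of that split, assuming known covariance matrices G (random effects) and R (residuals); the notation, the particular formulation, and the toy data are illustrative assumptions, not the paper's example.

```python
import numpy as np

def leverage_components(X, Z, G, R):
    """Generalized leverage of a linear mixed model y = X beta + Z b + e,
    split as L = L1 + L2 with
      L1 = X (X' V^-1 X)^-1 X' V^-1      (marginal fitted values),
      L2 = Z G Z' V^-1 (I - L1)          (random-effect component),
    where V = Z G Z' + R. (One standard formulation, assumed here.)"""
    V = Z @ G @ Z.T + R
    Vinv = np.linalg.inv(V)
    L1 = X @ np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv)
    L2 = Z @ G @ Z.T @ Vinv @ (np.eye(V.shape[0]) - L1)
    return L1, L2

# Tiny illustrative example: 6 observations, 2 groups with random intercepts
X = np.column_stack([np.ones(6), np.arange(6.0)])    # fixed effects: intercept + slope
Z = np.kron(np.eye(2), np.ones((3, 1)))              # random intercept per group
G = 0.5 * np.eye(2)                                  # random-intercept variance
R = 1.0 * np.eye(6)                                  # residual variance

L1, L2 = leverage_components(X, Z, G, R)
print("marginal leverages:     ", np.round(np.diag(L1), 2))
print("random-effect leverages:", np.round(np.diag(L2), 2))
```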