950 results for Decomposition of Ranked Models


Relevance:

100.00%

Publisher:

Abstract:

Subsequent to the influential paper of [Chan, K.C., Karolyi, G.A., Longstaff, F.A., Sanders, A.B., 1992. An empirical comparison of alternative models of the short-term interest rate. Journal of Finance 47, 1209-1227], the generalised method of moments (GMM) has been a popular technique for estimation and inference relating to continuous-time models of the short-term interest rate. GMM has been widely employed to estimate model parameters and to assess the goodness-of-fit of competing short-rate specifications. The current paper conducts a series of simulation experiments to document the bias and precision of GMM estimates of short-rate parameters, as well as the size and power of the [Hansen, L.P., 1982. Large sample properties of generalised method of moments estimators. Econometrica 50, 1029-1054] J-test of over-identifying restrictions. While the J-test appears to have appropriate size and good power in sample sizes commonly encountered in the short-rate literature, GMM estimates of the speed of mean reversion are shown to be severely biased. Consequently, it is dangerous to draw strong conclusions about the strength of mean reversion using GMM. In contrast, the parameter capturing the levels effect, which is important in differentiating between competing short-rate specifications, is estimated with little bias.
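The competing specifications compared in this literature nest within the CKLS form dr = (α + βr)dt + σr^γ dW. A minimal Euler-discretised simulation of that process, with purely illustrative parameter values, might look like:

```python
import random

def simulate_ckls(r0, alpha, beta, sigma, gamma, dt, n, seed=0):
    """Euler discretisation of the CKLS short-rate SDE:
    dr = (alpha + beta*r) dt + sigma * r**gamma dW."""
    rng = random.Random(seed)
    rates = [r0]
    r = r0
    for _ in range(n):
        dw = rng.gauss(0.0, dt ** 0.5)
        r += (alpha + beta * r) * dt + sigma * max(r, 1e-8) ** gamma * dw
        r = max(r, 1e-8)  # crude floor to keep the rate positive
        rates.append(r)
    return rates

# Illustrative parameters only: long-run mean alpha / -beta ~ 6.7%, and a
# square-root levels effect (gamma = 0.5), the CIR special case of CKLS.
path = simulate_ckls(r0=0.06, alpha=0.01, beta=-0.15, sigma=0.1,
                     gamma=0.5, dt=1 / 12, n=360)
```

GMM estimation would then match sample moments of such simulated paths against the model-implied moment conditions; the paper's point is that the estimate of β (the speed of mean reversion) is badly biased in samples of this size, while the estimate of γ (the levels effect) is not.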

Plant litter and fine roots are important in maintaining soil organic carbon (C) levels as well as for nutrient cycling. The decomposition of surface-placed litter, and of fine roots placed at 10-cm and 30-cm depths, of wheat (Triticum aestivum), lucerne (Medicago sativa), buffel grass (Cenchrus ciliaris), and mulga (Acacia aneura) was studied in the field in a Rhodic Paleustalf. After 2 years, ≈60% of mulga roots and twigs remained undecomposed. The rate of decomposition varied from 4.2 year⁻¹ for wheat roots to 0.22 year⁻¹ for mulga twigs, and was significantly correlated with the lignin concentration of both tops and roots. Aryl + O-aryl C concentration, as measured by ¹³C nuclear magnetic resonance spectroscopy, was also significantly correlated with the decomposition parameters, although with a lower R² value than the lignin concentration. Thus, lignin concentration provides a good predictor of litter and fine root decomposition in the field.
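The per-year rate constants quoted above imply single-exponential decay, X(t)/X(0) = exp(-kt). A quick check under that assumption reproduces the reported figures:

```python
import math

def fraction_remaining(k, t_years):
    """Single-exponential litter decay: X(t)/X(0) = exp(-k * t)."""
    return math.exp(-k * t_years)

# Rate constants reported above (per year), evaluated at t = 2 years
wheat_roots = fraction_remaining(4.2, 2.0)   # essentially fully decomposed
mulga_twigs = fraction_remaining(0.22, 2.0)  # ~0.64, i.e. roughly 60% left
```

The slow-decaying mulga value (exp(-0.44) ≈ 0.64) is consistent with the ≈60% of mulga material reported as still undecomposed after 2 years.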

Bistability arises within a wide range of biological systems, from the λ phage switch in bacteria to cellular signal transduction pathways in mammalian cells. Changes in regulatory mechanisms may result in genetic switching in a bistable system. Recently, more and more experimental evidence in the form of bimodal population distributions indicates that noise plays a very important role in the switching of bistable systems. Although deterministic models have been used for studying the existence of bistability properties under various system conditions, these models cannot realize cell-to-cell fluctuations in genetic switching. However, there is a lag in the development of stochastic models for studying the impact of noise in bistable systems because of the lack of detailed knowledge of biochemical reactions, kinetic rates, and molecular numbers. In this work, we develop a previously undescribed general technique for developing quantitative stochastic models for large-scale genetic regulatory networks by introducing Poisson random variables into deterministic models described by ordinary differential equations. Two stochastic models have been proposed for the genetic toggle switch interfaced with either the SOS signaling pathway or a quorum-sensing signaling pathway, and we have successfully realized experimental results showing bimodal population distributions. Because the introduced stochastic models are based on widely used ordinary differential equation models, the success of this work suggests that this approach is a very promising one for studying noise in large-scale genetic regulatory networks.
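The idea of introducing Poisson random variables into an ODE model can be illustrated on a much simpler system than the toggle switch: the birth-death process dx/dt = a − bx, where the deterministic increment over each small step dt is replaced by Poisson draws. This is an illustrative sketch, not the authors' model:

```python
import math
import random

def poisson_rv(rng, lam):
    # Knuth's method; adequate for the small means (lam * dt) used here
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def simulate_birth_death(a, b, x0, dt, n_steps, seed=1):
    """Stochastic counterpart of dx/dt = a - b*x: production and degradation
    counts in each interval dt are drawn from Poisson distributions."""
    rng = random.Random(seed)
    x = x0
    for _ in range(n_steps):
        x += poisson_rv(rng, a * dt) - poisson_rv(rng, b * x * dt)
        x = max(x, 0)  # molecule counts cannot go negative
    return x

# Deterministic steady state is a / b = 100 molecules; individual
# stochastic runs fluctuate around that value, mimicking cell-to-cell noise.
final = simulate_birth_death(a=10.0, b=0.1, x0=100, dt=0.1, n_steps=1000)
```

In a bistable network the same construction lets individual runs hop between the two stable states, producing the bimodal population distributions described above.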

Current physiologically based pharmacokinetic (PBPK) models are inductive. We present an additional, different approach that is based on the synthetic rather than the inductive approach to modeling and simulation. It relies on object-oriented programming. A model of the referent system in its experimental context is synthesized by assembling objects that represent components such as molecules, cells, aspects of tissue architecture, catheters, etc. The single-pass perfused rat liver has been well described in evaluating hepatic drug pharmacokinetics (PK) and is the system on which we focus. In silico experiments begin with administration of objects representing actual compounds. Data are collected in a manner analogous to that in the referent PK experiments. The synthetic modeling method allows for recognition and representation of discrete-event and discrete-time processes, as well as heterogeneity in organization, function, and spatial effects. An application is developed for sucrose and antipyrine, administered separately and together. PBPK modeling has made extensive progress in characterizing abstracted PK properties, but this has also been its limitation. Now, other important questions and possible extensions emerge. How are these PK properties and the observed behaviors generated? The inherent heuristic limitations of traditional models have hindered getting meaningful, detailed answers to such questions. Synthetic models of the type described here are specifically intended to help answer such questions. Analogous to wet-lab experimental models, they retain their applicability even when broken apart into sub-components. Having and applying this new class of models along with traditional PK modeling methods is expected to increase the productivity of pharmaceutical research at all levels that make use of modeling and simulation.
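The synthetic, object-oriented style described above can be caricatured in a few lines: objects representing liver segments are assembled in series, and a dose is passed through them as in a single-pass perfusion. This is a deliberately minimal sketch; the paper's actual models include cells, catheters, spatial organization, and discrete-event scheduling:

```python
class Sinusoid:
    """Toy liver segment object: extracts a fixed fraction of drug per pass
    (hypothetical component, far simpler than the paper's objects)."""
    def __init__(self, extraction):
        self.extraction = extraction

    def transit(self, amount):
        # Amount surviving one pass through this segment
        return amount * (1.0 - self.extraction)

def perfuse(dose, segments):
    """Single-pass perfusion: the dose traverses each segment in series;
    the outflow is whatever survives every segment."""
    amount = dose
    for seg in segments:
        amount = seg.transit(amount)
    return amount

liver = [Sinusoid(0.1) for _ in range(5)]  # five identical segments
outflow = perfuse(100.0, liver)            # 100 * 0.9**5 units recovered
```

The point of the synthetic approach is that such a model stays meaningful when broken apart: each `Sinusoid` can be inspected, replaced, or elaborated independently, unlike a fitted lumped-parameter equation.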

Mesoporous chromium oxide (Cr₂O₃) nanocrystals were synthesized for the first time by the thermal decomposition of Cr(NO₃)₃·9H₂O using citric acid monohydrate (CA) as the mesoporous templating agent. The texture and chemistry of the chromium oxide nanocrystals were characterized by N₂ adsorption-desorption isotherms, FTIR, X-ray diffraction (XRD), UV-vis, and thermoanalytical methods. It was shown that the hydrate water and CA are the crucial factors influencing the formation of mesoporous Cr₂O₃ nanocrystals in the mixture system. The decomposition of CA results in the formation of a mesoporous structure with wormlike pores. The hydrate water of the mixture provides surface hydroxyls that act as binders, making the nanocrystals aggregate. The pore structures and phases of the chromium oxide are affected by the precursor-to-CA ratio, the thermal treatment temperature, and time.

The Access to Allied Psychological Services component of Australia's Better Outcomes in Mental Health Care program enables eligible general practitioners to refer consumers to allied health professionals for affordable, evidence-based mental health care, via 108 projects conducted by Divisions of General Practice. The current study profiled the models of service delivery across these projects, and examined whether particular models were associated with differential levels of access to services. We found that: 76% of projects were retaining their allied health professionals under contract, 28% via direct employment, and 7% some other way; allied health professionals were providing services from GPs' rooms in 63% of projects, from their own rooms in 63%, and from a third location in 42%; and the referral mechanism of choice was direct referral in 51% of projects, a voucher system in 27%, a brokerage system in 24%, and a register system in 25%. Many of these models were being used in combination. No model was predictive of differential levels of access, suggesting that the approach of adapting models to the local context is proving successful.

On a global scale, basalts from mid-ocean ridges are strikingly more homogeneous than basalts from intraplate volcanism. The observed geochemical heterogeneity argues strongly for the existence of distinct reservoirs in the Earth's mantle. It is an unresolved problem of geodynamics how these findings can be reconciled with large-scale convection. We review observational constraints, and investigate stirring properties of numerical models of mantle convection. Conditions in the early Earth may have supported layered convection with rapid stirring in the upper layers. Material that has been altered near the surface is transported downwards by small-scale convection. Thereby a layer of homogeneous depleted material develops above pristine mantle. As the mantle cools over Earth history, the effects leading to layering become reduced, and models show the large-scale convection favoured for the Earth today. Laterally averaged, the upper mantle below the lithosphere is least affected by material that has experienced near-surface differentiation. The geochemical signature obtained during the previous episode of small-scale convection may be preserved there for the longest time. Additionally, stirring is less effective in the high-viscosity layer of the central lower mantle [1, 2], supporting the survival of medium-scale heterogeneities there. These models are the first, using 3-D spherical geometry and mostly Earth-like parameters, to address the suggested change of convective style. Although the models are still far from reproducing our planet, we find that this proposal might be helpful towards reconciling geochemical and geophysical constraints.

Deformable models are a highly accurate and flexible approach to segmenting structures in medical images. The primary drawback of deformable models is that they are sensitive to initialisation, with accurate and robust results often requiring initialisation close to the true object in the image. Automatically obtaining a good initialisation is problematic for many structures in the body. The cartilages of the knee are a thin elastic material that covers the ends of the bones, absorbing shock and allowing smooth movement. The degeneration of these cartilages characterizes the progression of osteoarthritis. The state of the art in cartilage segmentation consists of 2D semi-automated algorithms. These algorithms require significant time and supervision by a clinical expert, so the development of an automatic segmentation algorithm for the cartilages is an important clinical goal. In this paper we present an approach towards this goal that allows us to automatically provide a good initialisation for deformable models of the patella cartilage, by utilising the strong spatial relationship of the cartilage to the underlying bone.

Experiments with simulators allow psychologists to better understand the causes of human errors and build models of cognitive processes to be used in human reliability assessment (HRA). This paper investigates an approach to task failure analysis based on patterns of behaviour, in contrast to more traditional event-based approaches. It considers, as a case study, a formal model of an air traffic control (ATC) system which incorporates controller behaviour. The cognitive model is formalised in the CSP process algebra. Patterns of behaviour are expressed as temporal logic properties. Then a model-checking technique is used to verify whether the decomposition of the operator's behaviour into patterns is sound and complete with respect to the cognitive model. The decomposition is shown to be incomplete and a new behavioural pattern is identified, which appears to have been overlooked in the analysis of the data provided by the experiments with the simulator. This illustrates how formal analysis of operator models can yield fresh insights into how failures may arise in interactive systems.

There is currently considerable interest in developing general non-linear density models based on latent, or hidden, variables. Such models have the ability to discover the presence of a relatively small number of underlying 'causes' which, acting in combination, give rise to the apparent complexity of the observed data set. Unfortunately, to train such models generally requires large computational effort. In this paper we introduce a novel latent variable algorithm which retains the general non-linear capabilities of previous models but which uses a training procedure based on the EM algorithm. We demonstrate the performance of the model on a toy problem and on data from flow diagnostics for a multi-phase oil pipeline.
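The training procedure is EM; while the paper's model is a general non-linear latent-variable density model, the core E-step/M-step loop has the same shape as in this minimal two-component Gaussian mixture (an illustration of latent-variable density fitting, not the authors' algorithm):

```python
import math
import random

def em_gmm_1d(data, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture: the latent variable is
    which component generated each point."""
    mu = [min(data), max(data)]   # crude but well-separated initialisation
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibility of each component for each data point
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate parameters from the responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
            pi[k] = nk / len(data)
    return mu, var, pi

rng = random.Random(0)
data = [rng.gauss(-3, 1) for _ in range(200)] + [rng.gauss(3, 1) for _ in range(200)]
mu, var, pi = em_gmm_1d(data)  # means converge near -3 and +3
```

The non-linear models discussed above replace the discrete latent component with a continuous latent space mapped non-linearly into data space, but the alternation of expectation and maximisation steps is the same.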

Numerous studies find that monetary models of exchange rates cannot beat a random walk model. Such a finding, however, is not surprising given that such models are built upon money demand functions and traditional money demand functions appear to have broken down in many developed countries. In this article, we investigate whether using a more stable underlying money demand function results in improvements in forecasts of monetary models of exchange rates. More specifically, we use a sweep-adjusted measure of US monetary aggregate M1 which has been shown to have a more stable money demand function than the official M1 measure. The results suggest that the monetary models of exchange rates contain information about future movements of exchange rates, but the success of such models depends on the stability of money demand functions and the specifications of the models.
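"Beating a random walk" here means achieving lower out-of-sample forecast error than the no-change forecast s(t+1) = s(t). A sketch of that comparison, with toy data and a hypothetical model forecast series, looks like:

```python
def rmse(errors):
    """Root-mean-square forecast error."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

def compare_to_random_walk(rates, model_forecasts):
    """rates[t] is the observed exchange rate; model_forecasts[t] is the
    monetary model's forecast of rates[t + 1]. The random-walk benchmark
    simply forecasts no change."""
    rw_errors = [rates[t + 1] - rates[t] for t in range(len(rates) - 1)]
    model_errors = [rates[t + 1] - model_forecasts[t]
                    for t in range(len(rates) - 1)]
    return rmse(rw_errors), rmse(model_errors)

# Toy example: a steadily appreciating rate that a (hypothetical) model
# predicts perfectly, while the random walk is always one unit behind.
rates = [1.0, 2.0, 3.0, 4.0, 5.0]
model_forecasts = [2.0, 3.0, 4.0, 5.0]
rw_rmse, model_rmse = compare_to_random_walk(rates, model_forecasts)
```

The article's claim is that with a sweep-adjusted M1 measure (and hence a more stable money demand function), the monetary model's RMSE can fall below the random-walk benchmark, whereas with the official M1 measure it typically does not.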

This preliminary report describes work carried out as part of work package 1.2 of the MUCM research project. The report is split in two parts: the first part (Sections 1 and 2) summarises the state of the art in emulation of computer models, while the second presents some initial work on the emulation of dynamic models. In the first part, we describe the basics of emulation, introduce the notation, and put together the key results for the emulation of models with single and multiple outputs, with or without the use of a mean function. In the second part, we present preliminary results on the chaotic Lorenz 63 model. We look at emulation of a single time step, and repeated application of the emulator for sequential prediction. After some design considerations, the emulator is compared with the exact simulator on a number of runs to assess its performance. Several general issues related to emulating dynamic models are raised and discussed. Current work on the larger Lorenz 96 model (40 variables) is presented in the context of dimension reduction, with results to be provided in a follow-up report. The notation used in this report is summarised in the appendix.
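For reference, the Lorenz 63 simulator whose single time step is being emulated is just three coupled ODEs. A minimal one-step map (fourth-order Runge-Kutta, standard parameter values assumed; the report's integrator and step size may differ) is:

```python
def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One RK4 step of the Lorenz 63 system, i.e. the single-time-step map
    an emulator would be trained to approximate."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    def shift(s, d, h):
        return tuple(si + h * di for si, di in zip(s, d))

    k1 = f(state)
    k2 = f(shift(state, k1, dt / 2))
    k3 = f(shift(state, k2, dt / 2))
    k4 = f(shift(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Sequential prediction = repeated application of the one-step map; with an
# emulator, each exact step below would be replaced by the emulator's output.
state = (1.0, 1.0, 1.0)
for _ in range(1000):
    state = lorenz63_step(state)
```

Because the system is chaotic, small emulator errors at each step compound under repeated application, which is precisely why the sequential-prediction assessment described above is the interesting test.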

A recent method for phase equilibria, the AGAPE method, has been used to predict activity coefficients and excess Gibbs energy for binary mixtures with good accuracy. The theory, based on a generalised London potential (GLP), accounts for intermolecular attractive forces. Unlike existing prediction methods, for example UNIFAC, the AGAPE method uses only information derived from accessible experimental data and molecular information for pure components. Presently, the AGAPE method has some limitations, namely that the mixtures must consist of small, non-polar compounds with no hydrogen bonding, at low to moderate pressures and at conditions below the critical conditions of the components. The distinction between vapour-liquid equilibria and gas-liquid solubility is rather arbitrary, and it seems reasonable to extend these ideas to solubility. The AGAPE model uses a molecular lattice-based mixing rule. By judicious use of computer programs, a methodology was created to examine a body of experimental gas-liquid solubility data for gases such as carbon dioxide, propane, n-butane or sulphur hexafluoride, which all have critical temperatures a little above 298 K, dissolved in benzene, cyclohexane and methanol. Within this methodology the value of the GLP as an ab initio combining rule for such solutes in very dilute solutions in a variety of liquids has been tested. Using the GLP as a mixing rule involves the computation of rotationally averaged interactions between the constituent atoms, and new calculations have had to be made to discover the magnitude of the unlike pair interactions. These numbers have been seen as significant in their own right in the context of the behaviour of infinitely dilute solutions. A method for extending this treatment to "permanent" gases has also been developed.
The findings from the GLP method and from the more general AGAPE approach have been examined in the context of other models for gas-liquid solubility, both "classical" and contemporary, in particular those derived from equations-of-state methods and from reference solvent methods.

The thermal oxidation of two model compounds representing the aromatic polyamide MXD6 (poly(m-xylylene adipamide)) has been investigated. The model compounds, having different chemical structures (one corresponding to the aromatic part of the chain and the other to the aliphatic part) and based on the structure of MXD6, were prepared, and their reactions with different concentrations of cobalt ions were examined with the aim of identifying the role of the different structural components of MXD6 in the mechanism of oxidation. The study showed that cobalt, in the presence of sodium phosphite (which acts as an antioxidant for MXD6 and the model compounds), increases the oxidation of the model compounds. It is believed that the cobalt acts predominantly as a catalyst for the decomposition of hydroperoxides, formed during oxidation of the models in the melt phase, to free radical products, and to a lesser extent as a catalyst for the initiation of the oxidation reaction by complex formation with the amide, which is more likely to take place in the solid phase. An oxidation cycle has been proposed consisting of two parts, both of which will occur to some extent under all conditions of oxidation (in the melt and in the solid phase), but their individual predominance must be determined by the prevailing oxygen pressure at the reaction site. The different aspects of this proposed mechanism were examined through extensive model compound studies, with the evidence based on the nature of product formation and the kinetics of these reactions. The main techniques used to compare the rates of oxidation and to study the kinetics included oxygen absorption, FT-IR, UV and TGA. HPLC was used for product separation and identification.

Geometric information relating to most engineering products is available in the form of orthographic drawings or 2D data files. For many recent computer-based applications, such as Computer Integrated Manufacturing (CIM), these data are required in the form of a sophisticated model based on Constructive Solid Geometry (CSG) concepts. A recent novel technique in this area transfers 2D engineering drawings directly into a 3D solid model called 'the first approximation'. In many cases, however, this does not represent the real object. In this thesis, a new method is proposed and developed to enhance this model. This method uses the notion of expanding an object in terms of other solid objects, which are either primitive or first approximation models. To achieve this goal, in addition to a subroutine prepared to calculate the first approximation model of the input data, two other wireframe models are found for the extraction of sub-objects. One is the wireframe representation of the input, and the other is the wireframe of the first approximation model. A new fast method is developed for the latter special-case wireframe, which is named the 'first approximation wireframe model'. This method avoids the use of a solid modeller. Detailed descriptions of algorithms and implementation procedures are given. In these techniques, the utilisation of dashed-line information is also considered in improving the model. Different practical examples are given to illustrate the functioning of the program. Finally, a recursive method is employed to automatically modify the output model towards the real object. Some suggestions for further work are made to increase the domain of objects covered, and to provide a commercially usable package. It is concluded that the current method promises the production of accurate models for a large class of objects.