65 results for The bilinear method

in University of Queensland eSpace - Australia


Relevance:

100.00%

Publisher:

Abstract:

Subsequent to the influential paper of [Chan, K.C., Karolyi, G.A., Longstaff, F.A., Sanders, A.B., 1992. An empirical comparison of alternative models of the short-term interest rate. Journal of Finance 47, 1209-1227], the generalised method of moments (GMM) has been a popular technique for estimation and inference relating to continuous-time models of the short-term interest rate. GMM has been widely employed to estimate model parameters and to assess the goodness-of-fit of competing short-rate specifications. The current paper conducts a series of simulation experiments to document the bias and precision of GMM estimates of short-rate parameters, as well as the size and power of the J-test of over-identifying restrictions [Hansen, L.P., 1982. Large sample properties of generalised method of moments estimators. Econometrica 50, 1029-1054]. While the J-test appears to have appropriate size and good power in sample sizes commonly encountered in the short-rate literature, GMM estimates of the speed of mean reversion are shown to be severely biased. Consequently, it is dangerous to draw strong conclusions about the strength of mean reversion using GMM. In contrast, the parameter capturing the levels effect, which is important in differentiating between competing short-rate specifications, is estimated with little bias. (c) 2006 Elsevier B.V. All rights reserved.
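The bias result centres on the speed of mean reversion. As a minimal sketch (not the paper's experimental design), the snippet below simulates a Vasicek-type short rate and recovers the mean-reversion speed from the Euler discretisation; with instruments {1, r_t} the exactly-identified GMM estimator coincides with OLS of the rate change on {1, r_t}. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Vasicek short-rate model: dr = kappa*(theta - r) dt + sigma dW
kappa, theta, sigma, dt, n = 0.5, 0.05, 0.01, 1.0 / 12.0, 600

# Simulate one path with an Euler-Maruyama discretisation (monthly steps).
r = np.empty(n + 1)
r[0] = theta
for t in range(n):
    r[t + 1] = r[t] + kappa * (theta - r[t]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Moment conditions E[e_t] = 0 and E[e_t * r_{t-1}] = 0 for the discretised
# model: with instruments {1, r_{t-1}} the GMM estimator is the OLS fit of
# r_{t+1} - r_t on {1, r_t}, with slope -kappa*dt and intercept kappa*theta*dt.
X = np.column_stack([np.ones(n), r[:-1]])
y = np.diff(r)
a, b = np.linalg.lstsq(X, y, rcond=None)[0]
kappa_hat = -b / dt                 # implied speed of mean reversion
theta_hat = a / (kappa_hat * dt)    # implied long-run mean
```

Repeating this over many simulated paths and comparing `kappa_hat` to the true `kappa` is the kind of experiment that reveals the upward bias in the estimated speed of mean reversion.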

Abstract:

Recently the Balanced method was introduced as a class of quasi-implicit methods for solving stiff stochastic differential equations. We examine asymptotic and mean-square stability for several implementations of the Balanced method and give a generalized result for the mean-square stability region of any Balanced method. We also investigate the optimal implementation of the Balanced method with respect to strong convergence.
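One implementation of the Balanced method can be sketched on the standard linear test equation. The weights below (c0 = |lambda|, c1 = |mu|) are one common choice from the literature, not necessarily the optimal implementation the abstract investigates; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stiff linear test SDE dy = lam*y dt + mu*y dW (illustrative parameters).
# At this step size an explicit Euler-Maruyama scheme is mean-square
# unstable ((1 + lam*h)**2 + mu**2*h = 16.1 > 1), the situation the
# Balanced method is designed to handle.
lam, mu = -50.0, 1.0
h, n_steps, n_paths = 0.1, 50, 2000

# Balanced-method weights C_n = c0*h + c1*|dW|, here with c0 = |lam|, c1 = |mu|.
c0, c1 = abs(lam), abs(mu)

y = np.full(n_paths, 1.0)
for _ in range(n_steps):
    dW = np.sqrt(h) * rng.standard_normal(n_paths)
    C = c0 * h + c1 * np.abs(dW)
    # Quasi-implicit update solved in closed form for scalar y:
    # y_new = y + f*h + g*dW + C*(y - y_new)  =>  y_new = y + (f*h + g*dW)/(1 + C)
    y = y + (lam * y * h + mu * y * dW) / (1.0 + C)

ms = float(np.mean(y**2))   # sample mean-square value at t = n_steps*h
```

The decaying sample mean-square value illustrates the enlarged mean-square stability region that the abstract characterises in general form.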

Abstract:

The Direct Simulation Monte Carlo (DSMC) method is used to simulate the flow of rarefied gases. In the Macroscopic Chemistry Method (MCM) for DSMC, chemical reaction rates calculated from local macroscopic flow properties are enforced in each cell. Unlike the standard total collision energy (TCE) chemistry model for DSMC, the new method is not restricted to an Arrhenius form of the reaction rate coefficient, nor is it restricted to a collision cross-section which yields a simple power-law viscosity. For reaction rates of interest in aerospace applications, chemically reacting collisions are generally infrequent events and, as such, local equilibrium conditions are established before a significant number of chemical reactions occur. Hence, the reaction rates which have been used in MCM have been calculated from the reaction rate data which are expected to be correct only for conditions of thermal equilibrium. Here we consider artificially high reaction rates so that the fraction of reacting collisions is not small and propose a simple method of estimating the rates of chemical reactions which can be used in the Macroscopic Chemistry Method in both equilibrium and non-equilibrium conditions. Two tests are presented: (1) The dissociation rates under conditions of thermal non-equilibrium are determined from a zero-dimensional Monte Carlo sampling procedure which simulates ‘intra-modal’ non-equilibrium; that is, equilibrium distributions in each of the translational, rotational and vibrational modes but with different temperatures for each mode; (2) The 2-D hypersonic flow of molecular oxygen over a vertical plate at Mach 30 is calculated. In both cases the new method produces results in close agreement with those given by the standard TCE model in the same highly non-equilibrium conditions.
We conclude that the general method of estimating the non-equilibrium reaction rate is a simple means by which information contained within non-equilibrium distribution functions predicted by the DSMC method can be included in the Macroscopic Chemistry Method.
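The macroscopic step of an MCM-style scheme can be sketched as follows: instead of testing every collision pair (as in TCE), the expected number of reactions per cell per timestep is computed from macroscopic cell properties and enforced directly. The rate parameters and cell state below are hypothetical placeholders, not data from the paper.

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def arrhenius_rate(T, A=2.3e-8, b=-1.0, Ea=1.56e-18):
    """Hypothetical rate coefficient k(T) = A * T**b * exp(-Ea/(kB*T)), in m^3/s.
    (MCM is not restricted to this form; Arrhenius is used here only as a
    simple stand-in for rate data evaluated at the cell temperature.)"""
    return A * T**b * np.exp(-Ea / (kB * T))

def expected_reactions(n_density, T, cell_volume, dt):
    """Expected number of dissociations in one cell over one timestep for a
    like-species reaction: 0.5 * k(T) * n^2 * V * dt (0.5 avoids double-
    counting identical collision partners)."""
    return 0.5 * arrhenius_rate(T) * n_density**2 * cell_volume * dt

# Example cell: hypothetical hot, rarefied oxygen state.
n_events = expected_reactions(n_density=1e19, T=10000.0,
                              cell_volume=1e-12, dt=1e-8)
# The integer number of simulated reactions would then be drawn so that its
# mean equals n_events (e.g. floor plus a Bernoulli trial on the remainder).
```

The abstract's contribution is the estimation of this rate under non-equilibrium conditions; the sketch only shows where such a rate enters the per-cell bookkeeping.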

Abstract:

This paper presents a numerical technique for the design of an RF coil for asymmetric magnetic resonance imaging (MRI) systems. The formulation is based on an inverse approach where the cylindrical surface currents are expressed in terms of a combination of sub-domain basis functions: triangular and pulse functions. With the homogeneous transverse magnetic field specified in a spherical region, a functional method is applied to obtain the unknown current coefficients. The current distribution is then transformed to a conductor pattern by use of a stream function technique. Preliminary MR images acquired using a prototype RF coil are presented and validate the design method. (C) 2002 Elsevier Science B.V. All rights reserved.
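The inverse step has a simple linear-algebra core: the field at target points depends linearly on the unknown basis-function current coefficients, so the coefficients follow from a (regularised) least-squares solve. The snippet below is a generic sketch of that step; the random sensitivity matrix is stand-in data, where the paper would use Biot-Savart integrals of the triangular and pulse basis functions over the cylindrical coil surface.

```python
import numpy as np

rng = np.random.default_rng(2)

n_points, n_basis = 40, 12
A = rng.standard_normal((n_points, n_basis))   # stand-in sensitivity matrix
B_target = np.ones(n_points)                   # homogeneous transverse field goal

# Tikhonov-regularised least squares: minimise ||A c - B||^2 + alpha ||c||^2.
alpha = 1e-3
c = np.linalg.solve(A.T @ A + alpha * np.eye(n_basis), A.T @ B_target)

# Relative field error achieved by the fitted current coefficients.
residual = np.linalg.norm(A @ c - B_target) / np.linalg.norm(B_target)
```

In the actual design workflow the fitted coefficients define a continuous surface current, which a stream-function contouring step then converts into a discrete conductor pattern.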

Abstract:

Understanding the genetic architecture of quantitative traits can greatly assist the design of strategies for their manipulation in plant-breeding programs. For a number of traits, genetic variation can be the result of segregation of a few major genes and many polygenes (minor genes). The joint segregation analysis (JSA) is a maximum-likelihood approach for fitting segregation models through the simultaneous use of phenotypic information from multiple generations. Our objective in this paper was to use computer simulation to quantify the power of the JSA method for testing the mixed-inheritance model for quantitative traits when it was applied to the six basic generations: both parents (P-1 and P-2), F-1, F-2, and both backcross generations (B-1 and B-2) derived from crossing the F-1 to each parent. A total of 1968 genetic model-experiment scenarios were considered in the simulation study to quantify the power of the method. Factors that interacted to influence the power of the JSA method to correctly detect genetic models were: (1) whether there were one or two major genes in combination with polygenes, (2) the heritability of the major genes and polygenes, (3) the level of dispersion of the major genes and polygenes between the two parents, and (4) the number of individuals examined in each generation (population size). The greatest levels of power were observed for the genetic models defined with simple inheritance; e.g., the power was greater than 90% for the one major gene model, regardless of the population size and major-gene heritability. Lower levels of power were observed for the genetic models with complex inheritance (major genes and polygenes), low heritability, small population sizes and a large dispersion of favourable genes between the two parents; e.g., the power was less than 5% for the two major-gene model with a heritability value of 0.3 and population sizes of 100 individuals.
The JSA methodology was then applied to a previously studied sorghum data-set to investigate the genetic control of the putative drought-resistance trait osmotic adjustment in three crosses. The previous study concluded that there were two major genes segregating for osmotic adjustment in the three crosses. Application of the JSA method resulted in a change in the proposed genetic model: the presence of the two major genes was confirmed, with the addition of an unspecified number of polygenes.
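The likelihood machinery behind such segregation analyses can be sketched for a single generation. In an F2, one major gene (genotypes in 1:2:1 proportions) plus normally distributed polygenic and environmental variation makes the phenotype a three-component normal mixture; comparing its likelihood to a no-major-gene (single normal) model gives a likelihood-ratio test of the kind whose power the study quantifies. All parameter values below are hypothetical, and this is a one-generation toy, not the full multi-generation JSA.

```python
import numpy as np

rng = np.random.default_rng(3)

# F2 mixture: genotypes AA:Aa:aa in 1:2:1, additive major-gene effects,
# with residual (polygene + environment) variation around each mean.
weights = np.array([0.25, 0.5, 0.25])
means = np.array([-1.0, 0.0, 1.0])   # hypothetical genotype means
sd = 0.4                              # hypothetical residual sd

genotype = rng.choice(3, size=300, p=weights)
pheno = means[genotype] + sd * rng.standard_normal(300)

def norm_pdf(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def mixture_loglik(x, weights, means, sd):
    """Log-likelihood under the 1:2:1 three-component normal mixture."""
    comp = weights * norm_pdf(x[:, None], means[None, :], sd)
    return np.log(comp.sum(axis=1)).sum()

ll_mix = mixture_loglik(pheno, weights, means, sd)
# No-major-gene alternative: a single normal fitted to the same data.
ll_single = np.log(norm_pdf(pheno, pheno.mean(), pheno.std())).sum()
lr = 2.0 * (ll_mix - ll_single)   # likelihood-ratio statistic
```

Simulating many such data sets and counting how often the test rejects the single-component model is, in miniature, the power calculation the abstract performs across its 1968 scenarios.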

Abstract:

Most finite element packages use the Newmark algorithm for time integration of structural dynamics. Various algorithms have been proposed to better optimize the high frequency dissipation of this algorithm. Hulbert and Chung proposed both implicit and explicit forms of the generalized alpha method. The algorithms optimize high frequency dissipation effectively, and despite recent work on algorithms that possess momentum conserving/energy dissipative properties in a non-linear context, the generalized alpha method remains an efficient way to solve many problems, especially with adaptive timestep control. However, the implicit and explicit algorithms use incompatible parameter sets and cannot be used together in a spatial partition, whereas this can be done for the Newmark algorithm, as Hughes and Liu demonstrated, and for the HHT-alpha algorithm developed from it. The present paper shows that the explicit generalized alpha method can be rewritten so that it becomes compatible with the implicit form. All four algorithmic parameters can be matched between the explicit and implicit forms. An element interface between implicit and explicit partitions can then be used, analogous to that devised by Hughes and Liu to extend the Newmark method. The stability of the explicit/implicit algorithm is examined in a linear context and found to exceed that of the explicit partition. The element partition is significantly less dissipative of intermediate frequencies than one using the HHT-alpha method. The explicit algorithm can also be rewritten so that the discrete equation of motion evaluates forces from displacements and velocities found at the predicted mid-point of a cycle. Copyright (C) 2003 John Wiley & Sons, Ltd.
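For reference, the implicit generalized alpha update for a scalar undamped oscillator can be written out in a few lines. This is the standard Chung-Hulbert parameterisation by the spectral radius at infinity, applied to a toy problem, not the paper's explicit/implicit partition scheme; the oscillator and step size are illustrative.

```python
import numpy as np

# Scalar undamped oscillator M*a + K*u = 0 with natural period T = 1.
M, K = 1.0, (2.0 * np.pi) ** 2

# Chung-Hulbert parameters from the desired spectral radius at infinity.
rho_inf = 0.8
am = (2.0 * rho_inf - 1.0) / (rho_inf + 1.0)
af = rho_inf / (rho_inf + 1.0)
beta = 0.25 * (1.0 - am + af) ** 2
gamma = 0.5 - am + af

h, n_steps = 0.01, 100                 # integrate over one full period
u, v = 1.0, 0.0
a = -K * u / M                         # consistent initial acceleration

for _ in range(n_steps):
    # Predicted displacement (everything except the h^2*beta*a_new term).
    u_pred = u + h * v + h * h * (0.5 - beta) * a
    # Balance at the generalized mid-point:
    # M*((1-am)*a_new + am*a) + K*((1-af)*(u_pred + h^2*beta*a_new) + af*u) = 0
    lhs = M * (1.0 - am) + K * (1.0 - af) * h * h * beta
    rhs = -(M * am * a + K * ((1.0 - af) * u_pred + af * u))
    a_new = rhs / lhs
    u = u_pred + h * h * beta * a_new
    v = v + h * ((1.0 - gamma) * a + gamma * a_new)
    a = a_new
```

With 100 steps per period the displacement returns close to its initial value; raising the frequency relative to the step size would expose the controlled high-frequency dissipation that the parameter `rho_inf` sets.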

Abstract:

Objective: Expectancies about the outcomes of alcohol consumption are widely accepted as important determinants of drinking. This construct is increasingly recognized as a significant element of psychological interventions for alcohol-related problems. Much effort has been invested in producing reliable and valid instruments to measure this construct for research and clinical purposes, but very few have had their factor structure subjected to adequate validation. Among them, the Drinking Expectancies Questionnaire (DEQ) was developed to address some theoretical and design issues with earlier expectancy scales. Exploratory factor analyses, in addition to validity and reliability analyses, were performed when the original questionnaire was developed. The object of this study was to undertake a confirmatory analysis of the factor structure of the DEQ. Method: Confirmatory factor analysis through LISREL 8 was performed using a randomly split sample of 679 drinkers. Results: Results suggested that a new 5-factor model, which differs slightly from the original 6-factor version, was a more robust measure of expectancies. A new method of scoring the DEQ consistent with this factor structure is presented. Conclusions: The present study shows more robust psychometric properties of the DEQ using the new factor structure.
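The core computation in a confirmatory factor analysis can be sketched without LISREL: a factor model implies a covariance matrix, and fit is judged by the maximum-likelihood discrepancy between that implied matrix and the sample covariance. The one-factor model and loadings below are illustrative only and are not the DEQ's structure or data.

```python
import numpy as np

rng = np.random.default_rng(7)

# One-factor model: implied covariance Sigma = L @ L.T + diag(Theta).
p = 4
L = np.array([[0.8], [0.7], [0.6], [0.5]])     # hypothetical loadings
Theta = np.diag(1.0 - (L**2).ravel())           # unique variances
Sigma = L @ L.T + Theta                         # model-implied covariance

# Sample covariance of data generated from the same model.
n = 500
f = rng.standard_normal((n, 1))
x = f @ L.T + rng.standard_normal((n, p)) * np.sqrt(np.diag(Theta))
S = np.cov(x, rowvar=False)

# ML discrepancy F = ln|Sigma| + tr(S Sigma^-1) - ln|S| - p (zero iff S = Sigma);
# n*F is the basis of the usual chi-square fit statistic.
_, logdet_S = np.linalg.slogdet(S)
_, logdet_Sig = np.linalg.slogdet(Sigma)
F = logdet_Sig + np.trace(S @ np.linalg.inv(Sigma)) - logdet_S - p
```

A CFA program minimises `F` over the free loadings and variances under the hypothesised factor structure; comparing minimised discrepancies is how a 5-factor and a 6-factor model can be judged against each other.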

Abstract:

We present a novel method, called the transform likelihood ratio (TLR) method, for estimation of rare event probabilities with heavy-tailed distributions. Via a simple transformation (change of variables) technique the TLR method reduces the original rare event probability estimation with heavy-tailed distributions to an equivalent one with light-tailed distributions. Once this transformation has been established we estimate the rare event probability via importance sampling, using the classical exponential change of measure or the standard likelihood ratio change of measure. In the latter case the importance sampling distribution is chosen from the same parametric family as the transformed distribution. We estimate the optimal parameter vector of the importance sampling distribution using the cross-entropy method. We prove the polynomial complexity of the TLR method for certain heavy-tailed models and demonstrate numerically its high efficiency for various heavy-tailed models previously thought to be intractable. We also show that the TLR method can be viewed as a universal tool in the sense that it not only provides a unified view of heavy-tailed simulation but can also be efficiently used in simulation with light-tailed distributions. We present extensive simulation results which support the efficiency of the TLR method.
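The transformation idea can be shown in the simplest possible setting (a one-dimensional toy, not one of the paper's models): a Pareto variable X with tail index alpha can be written as X = exp(Z) with Z exponential, so the heavy-tailed event {X > gamma} becomes the light-tailed event {Z > ln gamma}, which a standard exponential change of measure handles efficiently. Parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Estimate P(X > gamma) for Pareto X with P(X > x) = x**(-alpha):
# writing X = exp(Z), Z ~ Exp(alpha), gives {X > gamma} = {Z > ln(gamma)}.
alpha, gamma, n = 2.0, 100.0, 100_000
thresh = np.log(gamma)

# Importance sampling density: Exp(eta) with its mean at the threshold
# (a simple hand-picked tilt; the paper tunes this via cross-entropy).
eta = 1.0 / thresh
Z = rng.exponential(1.0 / eta, size=n)

# Likelihood ratio of nominal Exp(alpha) to IS density Exp(eta).
W = (alpha / eta) * np.exp(-(alpha - eta) * Z)
est = float(np.mean(W * (Z > thresh)))

exact = gamma ** (-alpha)   # analytic tail probability, here 1e-4
```

Crude Monte Carlo with the same sample size would see the rare event only about ten times; the transformed-and-tilted estimator puts most samples near the threshold and achieves a relative error of roughly one percent.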

Abstract:

This paper evaluates a new, low-frequency finite-difference time-domain method applied to the problem of induced E-fields/eddy currents in the human body resulting from the pulsed magnetic field gradients in MRI. In this algorithm, a distributed equivalent magnetic current is proposed as the electromagnetic source and is obtained by quasistatic calculation of the empty coil's vector potential or measurements therein. This technique circumvents the discretization of complicated gradient coil geometries into a mesh of Yee cells, and thereby enables any type of gradient coil modelling or other complex low frequency sources. The proposed method has been verified against an example with an analytical solution. Results are presented showing the spatial distribution of gradient-induced electric fields in a multi-layered spherical phantom model and a complete body model. (C) 2004 Elsevier Inc. All rights reserved.
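The pre-processing step the abstract describes, evaluating the empty coil's quasistatic field to build the equivalent source, can be illustrated with the simplest coil there is. The snippet computes the on-axis field of a single circular loop and its time derivative under a pulsed current; the loop geometry and ramp rate are illustrative, not a real gradient coil, and a full implementation would evaluate the vector potential over the whole grid.

```python
import numpy as np

mu0 = 4e-7 * np.pi                      # vacuum permeability, H/m
a, I = 0.3, 100.0                       # loop radius (m), coil current (A)
z = np.linspace(-0.5, 0.5, 101)         # axial sample points (m)

# Exact quasistatic on-axis field of a circular current loop.
Bz = mu0 * I * a**2 / (2.0 * (a**2 + z**2) ** 1.5)

# For a pulsed gradient the induced-E source scales with dI/dt; a
# hypothetical trapezoidal ramp of 100 A in 0.2 ms gives dI/dt = 5e5 A/s.
dI_dt = 100.0 / 0.2e-3
dBz_dt = Bz / I * dI_dt                 # time derivative of the field
```

It is this kind of analytically (or numerically) computed source field, rather than a Yee-cell discretisation of the conductor pattern itself, that the proposed method injects into the low-frequency FDTD grid.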

Abstract:

Aim: The aim of this study was to assess the discriminatory power and potential turnaround time (TAT) of a PCR-based method for the detection of methicillin-resistant Staphylococcus aureus (MRSA) from screening swabs. Methods: Screening swabs were examined using the current laboratory protocol of direct culture on mannitol salt agar supplemented with oxacillin (MSAO-direct). The PCR method involved pre-incubation in broth for 4 hours followed by a multiplex PCR with primers directed to mecA and nuc genes of MRSA. The reference standard was determined by pre-incubation in broth for 4 hours followed by culture on MSAO (MSAO-broth). Results: A total of 256 swabs were analysed. The rates of detection of MRSA using MSAO-direct, MSAO-broth and PCR were 10.2, 13.3 and 10.2%, respectively. For PCR, the sensitivity, specificity, positive predictive value and negative predictive values were 66.7% (95% CI 51.9-83.3%), 98.6% (95% CI 97.1-100%), 84.6% (95% CI 76.2-100%) and 95.2% (95% CI 92.4-98.0%), respectively, and these results were almost identical to those obtained from MSAO-direct. The agreement between MSAO-direct and PCR was 61.5% (95% CI 42.8-80.2%) for positive results, 95.6% (95% CI 93.0-98.2%) for negative results and overall was 92.2% (95% CI 88.9-95.5%). Conclusions: (1) The discriminatory power of PCR and MSAO-direct is similar but the level of agreement, especially for true positive results, is low. (2) The potential TAT for the PCR method provides a marked advantage over conventional methods. (3) Further modifications to the PCR method such as increased broth incubation time, use of selective broth and adaptation to real-time PCR may lead to improvement in sensitivity and TAT.
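The diagnostic-accuracy figures quoted above come from a standard 2x2 table calculation, which is easy to reproduce. The counts below are hypothetical and are NOT the study's data; they simply show how sensitivity, specificity, predictive values and their normal-approximation confidence intervals are derived.

```python
import math

# Hypothetical 2x2 table: rows = index test (PCR), columns = reference standard.
tp, fp, fn, tn = 24, 4, 12, 216

sensitivity = tp / (tp + fn)   # proportion of reference positives detected
specificity = tn / (tn + fp)   # proportion of reference negatives cleared
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

def wald_ci(p, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion,
    clipped to [0, 1] -- the simplest of several available CI methods."""
    half = z * math.sqrt(p * (1.0 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

sens_ci = wald_ci(sensitivity, tp + fn)
spec_ci = wald_ci(specificity, tn + fp)
```

Note that predictive values, unlike sensitivity and specificity, shift with the prevalence of MRSA in the screened population, which is why all four are reported.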

Abstract:

The aim of this study was to investigate the incorporation of a model antigen, fluorescently labelled ovalbumin (FITC-OVA), into various colloidal particles including immune stimulating complexes (ISCOMs), liposomes, ring and worm-like micelles, lamellae and lipidic/layered structures that are formed from various combinations of the triterpene saponin Quil A, cholesterol and phosphatidylethanolamine (PE) following hydration of PE/cholesterol lipid films with aqueous solutions of Quil A. Colloidal dispersions of these three components were also prepared by the dialysis method for comparison. FITC-OVA was conjugated with palmitic acid (P) and PE to produce P-FITC-OVA and PE-FITC-OVA, respectively. Both P-FITC-OVA and PE-FITC-OVA could be incorporated in all colloidal structures whereas FITC-OVA was incorporated only into liposomes. The incorporation of PE-FITC-OVA into all colloidal structures was significantly higher than P-FITC-OVA (P < 0.05). The degree of incorporation of protein was in the order: ring and worm-like micelles < liposomes and lipidic/layered structures < ISCOMs and lamellae. The incorporation of protein into the various particles prepared by the lipid film hydration method was similar to those for colloidal particles prepared by the dialysis method (provided both methods lead to the formation of the same colloidal structures). In the case of different colloidal structures arising due to the preparation method, differences in encapsulation efficiency were found (P < 0.05) for formulations with the same polar lipid composition. This study demonstrates that the various colloidal particles formed as a result of hydrating PE/cholesterol lipid films with different amounts of Quil A are capable of incorporating antigen, provided it is amphipathic. Some of these colloidal particles may be used as effective vaccine delivery systems. (C) 2004 Elsevier B.V. All rights reserved.

Abstract:

In this paper we apply a new method for the determination of surface area of carbonaceous materials, using the local surface excess isotherms obtained from the Grand Canonical Monte Carlo simulation and a concept of area distribution in terms of energy well-depth of solid–fluid interaction. The range of this well-depth considered in our GCMC simulation is from 10 to 100 K, which is wide enough to cover all carbon surfaces that we dealt with (for comparison, the well-depth for perfect graphite surface is about 58 K). Having the set of local surface excess isotherms and the differential area distribution, the overall adsorption isotherm can be obtained in an integral form. Thus, given the experimental data of nitrogen or argon adsorption on a carbon material, the differential area distribution can be obtained from the inversion process, using the regularization method. The total surface area is then obtained as the area of this distribution. We test this approach with a number of data in the literature, and compare our GCMC-surface area with that obtained from the classical BET method. In general, we find that the difference between these two surface areas is about 10%, indicating the need to reliably determine the surface area with a very consistent method. We, therefore, suggest the approach of this paper as an alternative to the BET method because of the long-recognized unrealistic assumptions used in the BET theory. Besides the surface area obtained by this method, it also provides information about the differential area distribution versus the well-depth. This information could be used as a microscopic fingerprint of the carbon surface. It is expected that samples prepared from different precursors and different activation conditions will have distinct fingerprints.
We illustrate this with Cabot BP120, 280 and 460 samples; the differential area distributions obtained from the adsorption of argon at 77 K and of nitrogen at 77 K have exactly the same patterns, suggesting that the distribution is characteristic of the carbon.
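The inversion step described above is linear: the overall isotherm is a weighted sum of local isotherms over the well-depth grid, N(p_i) = sum_j K[i,j] * f_j, and the area distribution f is recovered by regularised least squares. The sketch below uses a Langmuir-type kernel with a well-depth-dependent affinity as a purely illustrative stand-in for the GCMC local excess isotherms, and synthetic data from a known two-peak distribution.

```python
import numpy as np

rng = np.random.default_rng(5)

eps = np.linspace(10.0, 100.0, 30)         # well-depth grid, K
p = np.logspace(-4, 0, 40)                  # reduced pressure grid

# Stand-in local-isotherm kernel: Langmuir coverage with affinity
# growing with well-depth (illustrative only, not a GCMC isotherm).
K = 1.0 / (1.0 + 1.0 / (p[:, None] * np.exp(eps[None, :] / 30.0)))

# Synthetic "experimental" isotherm from a known two-peak area distribution.
f_true = np.exp(-0.5 * ((eps - 40.0) / 6.0) ** 2) \
       + 0.5 * np.exp(-0.5 * ((eps - 75.0) / 8.0) ** 2)
N_data = K @ f_true + 0.001 * rng.standard_normal(len(p))

# Tikhonov-regularised inversion (the smoothing parameter is hand-picked
# here; in practice it would be chosen by a selection criterion).
reg = 1e-3
f_est = np.linalg.solve(K.T @ K + reg * np.eye(len(eps)), K.T @ N_data)

area_total = float(f_est.sum())             # total (relative) surface area
```

The total area is read off as the integral of the recovered distribution, while the distribution itself plays the "fingerprint" role described in the abstract.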

Abstract:

The cross-entropy (CE) method is a new generic approach to combinatorial and multi-extremal optimization and rare-event simulation. The purpose of this tutorial is to give a gentle introduction to the CE method. We present the CE methodology, the basic algorithm and its modifications, and discuss applications in combinatorial optimization and machine learning.
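The basic CE optimisation loop fits in a few lines: sample candidates from a parametric distribution, keep the elite fraction, refit the distribution to the elite, and repeat until it concentrates on the optimum. The toy objective and parameters below are illustrative, and the bare-bones update omits the parameter smoothing that standard CE implementations add.

```python
import numpy as np

rng = np.random.default_rng(6)

def S(x):
    """Toy objective to maximise (peak at x = 2)."""
    return np.exp(-(x - 2.0) ** 2)

mu, sigma = 0.0, 10.0            # initial (deliberately vague) sampling distribution
n_samples, n_elite = 200, 20     # 10% elite fraction

for _ in range(40):
    x = rng.normal(mu, sigma, n_samples)
    elite = x[np.argsort(S(x))[-n_elite:]]     # highest-scoring samples
    # CE update: refit the sampling distribution to the elite set.
    mu, sigma = elite.mean(), elite.std() + 1e-12
```

The same sample-select-refit template carries over to combinatorial problems (with, e.g., independent Bernoulli parameters per decision variable) and to rare-event simulation, where it tunes an importance-sampling density instead of locating an optimum.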