931 results for ESTIMATING EQUATIONS METHOD
Abstract:
An algorithm for suppressing chaotic oscillations in non-linear dynamical systems with singular Jacobian matrices is developed using a linear feedback control law based upon the Lyapunov-Krasovskii (LK) method. It appears that the LK method can serve effectively as a generalised method for the suppression of chaotic oscillations for a wide range of systems. Based on this method, the resulting conditions for undisturbed motions to be locally or globally stable are sufficient and conservative. The generalised Lorenz system and disturbed gyrostat equations are used to validate the proposed feedback control law.
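The abstract does not reproduce the LK-derived control law itself. As a rough, hypothetical illustration of suppressing chaos with linear state feedback, the sketch below stabilises a classical Lorenz system about one of its equilibria; the gain K, the switching time and all parameter values are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.integrate import solve_ivp

# Classical Lorenz parameters; the paper's generalised Lorenz system differs.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
K = 30.0  # hypothetical full-state feedback gain
x_eq = np.array([np.sqrt(beta * (rho - 1)), np.sqrt(beta * (rho - 1)), rho - 1.0])

def lorenz_controlled(t, x):
    # Linear feedback u = -K (x - x_eq), switched on at t = 20 for contrast.
    u = -K * (x - x_eq) if t > 20.0 else 0.0
    dx = np.array([sigma * (x[1] - x[0]),
                   x[0] * (rho - x[2]) - x[1],
                   x[0] * x[1] - beta * x[2]])
    return dx + u

sol = solve_ivp(lorenz_controlled, (0.0, 40.0), [1.0, 1.0, 1.0], max_step=0.01)
print("final state:", sol.y[:, -1])  # settles near x_eq once the control is active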
Abstract:
Recently, the Balanced method was introduced as a class of quasi-implicit methods for solving stiff stochastic differential equations. We examine asymptotic and mean-square stability for several implementations of the Balanced method and give a generalized result for the mean-square stability region of any Balanced method. We also investigate the optimal implementation of the Balanced method with respect to strong convergence.
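As a minimal sketch of one Balanced-method implementation on the stiff linear test equation dX = aX dt + bX dW, the snippet below uses the common weight choice c0 = |a|, c1 = |b|; this is one implementation of the kind the abstract compares, not necessarily the optimal one, and all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)
a, b = -50.0, 2.0              # stiff linear test SDE: dX = a X dt + b X dW
h, n_steps, n_paths = 0.01, 200, 1000

# Balanced step: X_{n+1} = X_n + a X_n h + b X_n dW + C_n (X_n - X_{n+1}),
# with C_n = c0 h + c1 |dW|, solved in closed form for the linear case.
c0, c1 = abs(a), abs(b)        # a common (not necessarily optimal) weight choice
X = np.ones(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(h), n_paths)
    C = c0 * h + c1 * np.abs(dW)
    X = X * (1.0 + a * h + b * dW + C) / (1.0 + C)

print("mean-square value at t = 2:", np.mean(X**2))  # decays when mean-square stable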
Abstract:
Most magnetic resonance imaging (MRI) spatial encoding techniques employ low-frequency pulsed magnetic field gradients that undesirably induce multiexponentially decaying eddy currents in nearby conducting structures of the MRI system. The eddy currents degrade the switching performance of the gradient system, distort the MRI image, and introduce thermal loads in the cryostat vessel and superconducting MRI components. Heating of superconducting magnets due to induced eddy currents is particularly problematic as it offsets the superconducting operating point, which can cause a system quench. A numerical characterization of transient eddy current effects is vital for their compensation/control and further advancement of MRI technology as a whole. However, transient eddy current calculations are particularly computationally intensive. In large-scale problems, such as gradient switching in MRI, conventional finite-element method (FEM)-based routines impose very large computational loads during generation/solving of the system equations. Therefore, other computational alternatives need to be explored. This paper outlines a three-dimensional finite-difference time-domain (FDTD) method in cylindrical coordinates for the modeling of low-frequency transient eddy currents in MRI, as an extension to the recently proposed time-harmonic scheme. The weakly coupled Maxwell's equations are adapted to the low-frequency regime by downscaling the speed-of-light constant, which permits the use of larger FDTD time steps while maintaining the validity of the Courant-Friedrichs-Lewy stability condition. The principal hypothesis of this work is that the modified FDTD routine can be employed to analyze pulsed-gradient-induced, transient eddy currents in superconducting MRI system models. The hypothesis is supported through a verification of the numerical scheme on a canonical problem and by analyzing undesired temporal eddy current effects such as the B0 shift caused by actively shielded symmetric/asymmetric transverse x-gradient head and unshielded z-gradient whole-body coils operating in proximity to a superconducting MRI magnet.
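The downscaling trick can be illustrated by the CFL bound itself: for a three-dimensional grid the maximum stable FDTD time step scales as 1/c, so reducing the speed-of-light constant enlarges the step accordingly. The sketch below uses hypothetical cell sizes and a hypothetical scaling factor; for a cylindrical grid the angular cell size is strictly the smallest arc length r·dφ.

import numpy as np

c0 = 2.998e8          # physical speed of light, m/s
scale = 1.0e-4        # hypothetical downscaling factor for the low-frequency regime
c = scale * c0        # reduced wave speed used by the modified scheme

# Illustrative cell sizes (dr, r*dphi, dz) in metres for a cylindrical grid.
dr, rdphi, dz = 5e-3, 5e-3, 5e-3

# Courant-Friedrichs-Lewy limit: dt <= 1 / (c * sqrt(1/dr^2 + 1/(r dphi)^2 + 1/dz^2))
dt_max = 1.0 / (c * np.sqrt(1.0 / dr**2 + 1.0 / rdphi**2 + 1.0 / dz**2))
print(f"maximum stable time step: {dt_max:.3e} s")  # grows as c is scaled down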
Abstract:
Forty-four soils from under native vegetation and a range of management practices following clearing were analysed for ‘labile’ organic carbon (OC) using both the particulate organic carbon (POC) and the 333 mM KMnO4 (MnoxC) methods. Although there was some correlation between the 2 methods, the POC method was more sensitive, by about a factor of 2, to rapid loss in OC as a result of management or land-use change. Unlike the POC method, the MnoxC method was insensitive to rapid gains in total organic carbon (TOC) following establishment of pasture on degraded soil. The MnoxC method was shown to be particularly sensitive to the presence of lignin or lignin-like compounds and is therefore likely to be very sensitive to the nature of the vegetation present at or near the time of sampling, which explains the insensitivity of this method to OC gain under pasture. The presence of charcoal is an issue with both techniques, but whereas the charcoal contribution to the POC fraction can be assessed, the MnoxC method cannot distinguish between charcoal and most biomolecules found in soil. Because of these limitations, the MnoxC method should not be applied indiscriminately across different soil types and management practices.
Abstract:
Many different methods of reporting animal diets have been used in ecological research. These vary greatly in level of accuracy and precision and therefore complicate attempts to measure and compare diets, and quantities of nutrients in those diets, across a wide range of taxa. For most birds, the carotenoid content of the diet has not been directly measured. Here, therefore, I use an avian example to show how different methods of measuring the quantities of various foods in the diet affect the relative rankings of higher taxa (families, subfamilies, and tribes), and species within these taxa, with regard to the carotenoid contents of their diets. This is a timely example, as much recent avian literature has focused on the way dietary carotenoids may be traded off among aspects of survival, fitness and signalling. I assessed the mean dietary carotenoid contents of representatives of thirty higher taxa of birds using four different carotenoid intake indices varying in precision: trophic levels, a coarse-scale and a fine-scale categorical index, and quantitative estimates of dietary carotenoids. This last method was used as the benchmark. For comparisons among taxa, all but the trophic level index were significantly correlated with each other. However, for comparisons of species within taxa, the fine-scale index outperformed the coarse-scale index, which in turn outperformed the trophic level index. In addition, each method has advantages and disadvantages, as well as underlying assumptions that must be considered. Examining and comparing several possible methods of diet assessment can highlight these issues so that the best possible index is used given the available data; it is recommended that such a step be taken before estimated nutrient intake is included in any statistical analysis. Although applied to avian carotenoids here, this approach could readily be applied to other taxa and types of nutrients.
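Since the comparison among indices rests on how similarly they rank taxa, a rank correlation is the natural statistic. The sketch below is purely illustrative: the scores are synthetic stand-ins, not the study's data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
benchmark = rng.gamma(2.0, 1.0, 30)               # stand-in quantitative carotenoid estimates
coarse = np.digitize(benchmark, [0.5, 1.5, 3.0])  # a hypothetical coarse categorical index

# Spearman rank correlation: how similarly the two indices order the 30 taxa.
rho, p = stats.spearmanr(benchmark, coarse)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")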
Abstract:
Purpose: This study evaluated the predictive validity of three previously published ActiGraph energy expenditure (EE) prediction equations developed for children and adolescents. Methods: A total of 45 healthy children and adolescents (mean age: 13.7 +/- 2.6 yr) completed four 5-min activity trials (normal walking, brisk walking, easy running, and fast running) in an indoor exercise facility. During each trial, participants wore an ActiGraph accelerometer on the right hip. EE was monitored breath by breath using the Cosmed K4b(2) portable indirect calorimetry system. Differences and associations between measured and predicted EE were assessed using dependent t-tests and Pearson correlations, respectively. Classification accuracy was assessed using percent agreement, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve. Results: None of the equations accurately predicted mean EE across all four activity trials. Each equation, however, accurately predicted mean EE in at least one activity trial. The Puyau equation accurately predicted EE during slow walking. The Trost equation accurately predicted EE during slow running. The Freedson equation accurately predicted EE during fast running. None of the three equations accurately predicted EE during brisk walking. The equations exhibited fair to excellent classification accuracy with respect to activity intensity, with the Trost equation exhibiting the highest classification accuracy and the Puyau equation exhibiting the lowest. Conclusions: These data suggest that the three accelerometer prediction equations do not accurately predict EE on a minute-by-minute basis in children and adolescents during overground walking and running. The equations may be useful, however, for estimating participation in moderate and vigorous physical activity.
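The validation statistics named in the abstract (dependent t-tests and Pearson correlations between measured and predicted EE) can be computed with standard library calls. The sketch below runs them on synthetic values, since the study's measurements are not given here.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical per-participant EE for one trial (kcal/min), n = 45 as in the study.
measured = rng.normal(5.0, 1.0, 45)               # indirect calorimetry
predicted = measured + rng.normal(0.5, 0.8, 45)   # a deliberately biased equation

t, p = stats.ttest_rel(predicted, measured)       # dependent t-test for mean bias
r, _ = stats.pearsonr(predicted, measured)        # association between the two methods
print(f"paired t = {t:.2f} (p = {p:.3f}), Pearson r = {r:.2f}")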
Abstract:
Predatory insects and spiders are key elements of integrated pest management (IPM) programmes in agricultural crops such as cotton. Management decisions in IPM programmes should be based on a reliable and efficient method for counting both predators and pests. Knowledge of the temporal constraints that influence sampling is required because arthropod abundance estimates are likely to vary over a growing season and within a day. Few studies have adequately quantified this effect using the beat sheet, a potentially important sampling method. We compared the commonly used methods of suction and visual sampling to the beat sheet, with reference to an absolute cage clamp method, for determining the abundance of various arthropod taxa over 5 weeks. There were significantly more entomophagous arthropods recorded using the beat sheet and cage clamp methods than by using suction or visual sampling, and these differences became more pronounced as the plants grew. In a second trial, relative estimates of entomophagous and phytophagous arthropod abundance were made using beat sheet samples collected over a day. Beat sheet estimates of the abundance of only eight of the 43 taxa examined varied significantly over a day. Beat sheet sampling is recommended in further studies of arthropod abundance in cotton, but researchers and pest management advisors should bear in mind time-of-season and time-of-day effects.
Abstract:
We apply the truncated Wigner method to the process of three-body recombination in ultracold Bose gases. We find that within the validity regime of the Wigner truncation for two-body scattering, three-body recombination can be treated using a set of coupled stochastic differential equations that include diffusion terms, and can be simulated using known numerical methods. As an example we investigate the behavior of a simple homogeneous Bose gas, finding a very slight increase of the loss rate compared to that obtained by using the standard method.
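The coupled stochastic differential equations themselves are not given in the abstract. The sketch below only shows the kind of Euler-Maruyama integration they can be fed to, with a schematic cubic-loss drift for a single mode and a placeholder diffusion amplitude standing in for the Wigner noise terms; none of these expressions are the paper's.

import numpy as np

def euler_maruyama(drift, diffusion, x0, h, n_steps, rng):
    # Integrate dX = drift(X) dt + diffusion(X) dW for a complex-valued field.
    x = np.array(x0, dtype=complex)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h), x.shape) + 1j * rng.normal(0.0, np.sqrt(h), x.shape)
        x = x + drift(x) * h + diffusion(x) * dW
    return x

gamma = 1e-4   # hypothetical three-body loss coefficient, scaled units
rng = np.random.default_rng(3)
final = euler_maruyama(lambda a: -0.5 * gamma * np.abs(a)**4 * a,   # cubic-loss drift
                       lambda a: 1e-2 * np.ones_like(a),            # placeholder diffusion
                       x0=np.full(1000, 5.0 + 0.0j), h=1e-3, n_steps=5000, rng=rng)
print("mean density:", np.mean(np.abs(final)**2))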
Abstract:
We thank Hilberts and Troch [2006] for their comment on our paper [Cartwright et al., 2005]. Before proceeding with our specific replies to the comments, we would first like to clarify the definitions and meanings of equations (1)-(3) as presented by Hilberts and Troch [2006]. First, equation (1) is the fundamental definition of the (complex) effective porosity as derived by Nielsen and Perrochet [2000]. Equations (2) and (3), however, represent the linear frequency response function of the water table in the sand column responding to simple harmonic forcing. This function, which was validated by Nielsen and Perrochet [2000], provides an alternative method for estimating the complex effective porosity from the experimental sand column data in the absence of direct measurements of h_tot (which are required if equation (1) is to be used).
Abstract:
Finite element analysis (FEA) of nonlinear problems in solid mechanics is a time-consuming process, but it can deal rigorously with the geometric, contact and material nonlinearities that occur in roll forming. The simulation time limits the application of nonlinear FEA to these problems in industrial practice, so that most applications of nonlinear FEA are in theoretical studies and engineering consulting or troubleshooting. Instead, quick methods based on a global assumption of the deformed shape have been used by the roll-forming industry. These approaches are of limited accuracy. This paper proposes a new form-finding method - a relaxation method - to solve the nonlinear problem of predicting the deformed shape due to plastic deformation in roll forming. This method involves applying a small perturbation to each discrete node in order to update the local displacement field, while minimising plastic work. This is applied iteratively to update the positions of all nodes. As the method assumes a local displacement field, the strain and stress components at each node are calculated explicitly. Continued perturbation of nodes leads to optimisation of the displacement field. Another important feature of this paper is a new approach to the treatment of strain history. For a stable and continuous process such as rolling and roll forming, the strain history of a point is represented spatially by the states at a row of nodes leading in the direction of rolling to the current one. Therefore the increments of the strain components and the work increment of a point can be found without moving the object forward. Using this method we can find the solution for rolling or roll forming in just one step. This method is expected to be faster than commercial finite element packages because it eliminates both the repeated solution of large sets of simultaneous equations and the need to update boundary conditions that represent the rolls.
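As a toy of the nodal relaxation idea (perturb each node, keep moves that reduce the work functional, iterate), the sketch below minimises a quadratic stand-in objective; the paper's plastic-work functional, local displacement field and strain bookkeeping are not reproduced.

import numpy as np

def relax(nodes, energy, step, n_sweeps):
    # Perturb each nodal coordinate in turn; keep any move that lowers the objective.
    for _ in range(n_sweeps):
        for i in range(nodes.shape[0]):
            for d in range(nodes.shape[1]):
                for s in (step, -step):
                    trial = nodes.copy()
                    trial[i, d] += s
                    if energy(trial) < energy(nodes):
                        nodes = trial
    return nodes

rng = np.random.default_rng(4)
nodes0 = rng.normal(0.0, 1.0, (20, 3))                      # 20 nodes in 3-D
relaxed = relax(nodes0, lambda n: np.sum(n**2), 0.05, 200)  # quadratic stand-in "work"
print("residual norm:", np.linalg.norm(relaxed))            # converges toward the minimiser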
Abstract:
In this paper we propose a fast adaptive Importance Sampling method for the efficient simulation of buffer overflow probabilities in queueing networks. The method comprises three stages. First we estimate the minimum Cross-Entropy tilting parameter for a small buffer level; next, we use this as a starting value for the estimation of the optimal tilting parameter for the actual (large) buffer level; finally, the tilting parameter just found is used to estimate the overflow probability of interest. We recognize three distinct properties of the method which together explain why the method works well; we conjecture that they hold for quite general queueing networks. Numerical results support this conjecture and demonstrate the high efficiency of the proposed algorithm.
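A sketch of the three-stage procedure on a textbook rare event, a sum of exponentials exceeding a level, rather than an actual queueing network; the sample sizes, the event and the exponential-family parametrisation of the tilting are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(5)
n, gamma_lvl, u = 5, 20.0, 1.0   # rare event: sum of 5 Exp(mean u) variables >= 20
N, rho = 10_000, 0.1

# Stages 1-2: adaptive Cross-Entropy updates of the tilting parameter v (the mean
# of the sampling exponentials), raising an intermediate level toward gamma_lvl.
v = u
for _ in range(20):
    X = rng.exponential(v, (N, n))
    S = X.sum(axis=1)
    level = min(np.quantile(S, 1 - rho), gamma_lvl)
    elite = S >= level
    W = np.exp(-S[elite] * (1.0 / u - 1.0 / v)) * (v / u) ** n   # likelihood ratios
    v = np.sum(W * S[elite]) / (n * np.sum(W))                   # CE update of v
    if level >= gamma_lvl:
        break

# Stage 3: final importance-sampling estimate under the tilted density.
X = rng.exponential(v, (N, n))
S = X.sum(axis=1)
W = np.exp(-S * (1.0 / u - 1.0 / v)) * (v / u) ** n
print("overflow probability estimate:", np.mean(W * (S >= gamma_lvl)))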
Abstract:
This paper presents a general methodology for estimating and incorporating uncertainty in the controller and forward models for noisy nonlinear control problems. Conditional distribution modeling in a neural network context is used to estimate uncertainty around the prediction of neural network outputs. The developed methodology circumvents the dynamic programming problem by using the predicted neural network uncertainty to localize the possible control solutions to consider. A nonlinear multivariable system with different delays between the input-output pairs is used to demonstrate the successful application of the developed control algorithm. The proposed method is suitable for redundant control systems and allows us to model strongly non-Gaussian distributions of the control signal as well as processes with hysteresis.
Abstract:
Stochastic differential equations arise naturally in a range of contexts, from financial to environmental modeling. Current solution methods are limited in their representation of the posterior process in the presence of data. In this work, we present a novel Gaussian process approximation to the posterior measure over paths for a general class of stochastic differential equations in the presence of observations. The method is applied to two simple problems: the Ornstein-Uhlenbeck process, for which the exact solution is known and can be used for comparison, and the double-well system, for which standard approaches such as the ensemble Kalman smoother fail to provide a satisfactory result. Experiments show that our variational approximation is viable and the results are very promising, as the variational approximate solution outperforms standard Gaussian process regression for non-Gaussian Markov processes.
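For the Ornstein-Uhlenbeck case the exact posterior over paths is itself a Gaussian process with exponential kernel, which is the baseline such approximations are compared against. The sketch below computes that exact GP smoother for synthetic observations; all parameter values are illustrative, and the paper's variational approximation is not reproduced.

import numpy as np

# Stationary OU covariance: k(s, t) = sigma^2 / (2 theta) * exp(-theta |s - t|).
theta, sigma, noise = 1.0, 0.5, 0.1
k = lambda s, t: sigma**2 / (2 * theta) * np.exp(-theta * np.abs(s[:, None] - t[None, :]))

rng = np.random.default_rng(6)
t_obs = np.linspace(0.0, 10.0, 15)
y = rng.multivariate_normal(np.zeros(15), k(t_obs, t_obs)) + rng.normal(0.0, noise, 15)

t_star = np.linspace(0.0, 10.0, 200)
K = k(t_obs, t_obs) + noise**2 * np.eye(15)
mean = k(t_star, t_obs) @ np.linalg.solve(K, y)   # exact posterior mean over paths
print("posterior mean at t = 5:", mean[100])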
Abstract:
Electrical compound action potentials (ECAPs) of the cochlear nerve are used clinically for quick and efficient cochlear implant parameter setting. The ECAP is the aggregate response of nerve fibres at various distances from the recording electrode, and the magnitude of the ECAP is therefore related to the number of fibres excited by a particular stimulus. Current methods, such as the masker-probe or alternating polarity methods, use the ECAP magnitude at various stimulus levels to estimate the neural threshold, from which the parameters are calculated. However, the correlation between ECAP threshold and perceptual threshold is not always good, with ECAP threshold typically being much higher than perceptual threshold. The lower correlation is partly due to the very different pulse rates used for ECAPs (below 100 Hz) and clinical programs (hundreds of Hz up to several kHz). Here we introduce a new method of estimating ECAP threshold for cochlear implants based upon the variability of the response. At neural threshold, where some but not all fibres respond, the response differs from trial to trial. This inter-trial variability can be detected on top of the constant variability of the system noise. The large stimulus artefact, which requires additional trials for artefact rejection in the standard ECAP magnitude methods, is not consequential here, as it has little variability. The variability method therefore consists of simply presenting a pulse and recording the ECAP, and as such is quicker than other methods. It also has the potential to be run at high rates like clinical programs, potentially improving the correlation with behavioural threshold. Preliminary data are presented showing a detectable variability increase shortly after probe offset, at probe levels much lower than those producing a detectable ECAP magnitude. Care must be taken, however, to avoid saturation of the recording amplifier; in our experiments we found a gain of 300 to be optimal.
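The variability method reduces to comparing inter-trial variance after the probe with the system-noise floor. The sketch below applies that comparison to synthetic recordings in which a trial-to-trial variable component is injected after probe offset; all waveforms and numbers are illustrative, not recorded data.

import numpy as np

rng = np.random.default_rng(7)
n_trials, n_samples = 50, 200

# Constant system noise everywhere, plus an extra trial-to-trial variable
# component over samples 80-120 standing in for near-threshold neural responses.
rec = rng.normal(0.0, 1.0, (n_trials, n_samples))
rec[:, 80:120] += rng.normal(0.0, 1.5, (n_trials, 1)) * np.hanning(40)

var_across = rec.var(axis=0)           # inter-trial variance at each sample
noise_floor = var_across[:60].mean()   # pre-probe system-noise estimate
print("peak variance ratio:", var_across.max() / noise_floor)  # >> 1 flags a response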
Abstract:
This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variation of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein–Uhlenbeck process, whose exact likelihood can be computed analytically; the univariate, highly non-linear stochastic double-well system; and the multivariate, chaotic stochastic Lorenz ’63 (3D) model. As a special case the algorithm is also applied to the 40-dimensional stochastic Lorenz ’96 system. In our investigation we compare this new approach with a variety of other well-known methods, such as hybrid Monte Carlo, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) parts of the model evolution equations using our new methods.
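As an indication of the test systems involved, the sketch below generates a path of the stochastic Lorenz '63 model by Euler-Maruyama and thins it into sparse noisy observations, the kind of data such smoothing and parameter-estimation experiments start from; the diffusion level and observation noise are illustrative choices, not the paper's.

import numpy as np

sigma, rho, beta, diff = 10.0, 28.0, 8.0 / 3.0, 2.0   # classical drift, illustrative diffusion
h, n_steps = 1e-3, 20_000
rng = np.random.default_rng(8)

x = np.array([1.0, 1.0, 25.0])
path = np.empty((n_steps, 3))
for i in range(n_steps):
    drift = np.array([sigma * (x[1] - x[0]),
                      x[0] * (rho - x[2]) - x[1],
                      x[0] * x[1] - beta * x[2]])
    x = x + drift * h + diff * np.sqrt(h) * rng.normal(size=3)   # Euler-Maruyama step
    path[i] = x

obs = path[::250] + rng.normal(0.0, 1.0, (80, 3))   # sparse, noisy observations
print("first observation:", obs[0])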