948 results for Bivariate Hermite polynomials
Abstract:
This study uses a Granger causality time series modeling approach to quantitatively diagnose the feedback of daily sea surface temperatures (SSTs) on daily values of the North Atlantic Oscillation (NAO) as simulated by a realistic coupled general circulation model (GCM). Bivariate vector autoregressive time series models are carefully fitted to daily wintertime SST and NAO time series produced by a 50-yr simulation of the Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3). The approach demonstrates that there is a small yet statistically significant feedback of SSTs on the NAO. The SST tripole index is found to provide predictive information for the NAO beyond that available from past values of the NAO alone; that is, the SST tripole is Granger causal for the NAO. Careful examination of local SSTs reveals that much of this effect is due to SSTs in the region of the Gulf Stream, especially south of Cape Hatteras. The effect of SSTs on the NAO is responsible for the slower-than-exponential decay in the lag autocorrelations of the NAO that is notable at lags longer than 10 days. The persistence induced in the daily NAO by SSTs causes long-term means of the NAO to have more variance than would be expected from averaging NAO noise in the absence of any feedback of the ocean on the atmosphere. There are greater long-term trends in the NAO than can be expected from aggregating short-term atmospheric noise alone, and the NAO is potentially predictable provided that future SSTs are known. For example, there is about 10%-30% more variance in seasonal wintertime means of the NAO, and almost 70% more variance in annual means, due to SST effects than one would expect if the NAO were a purely atmospheric process.
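As a minimal illustration of the bivariate Granger-causality test underlying this approach, the sketch below applies the standard test to synthetic stand-ins for the daily NAO and SST series; the HadCM3 output is not reproduced here, and the lag order, feedback strength, and series length are invented for the example.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 4500                                  # roughly 50 winters of daily values
sst = np.zeros(n)
nao = np.zeros(n)
for t in range(1, n):
    # Slowly varying "SST" with a weak feedback onto a fast "NAO"
    sst[t] = 0.98 * sst[t - 1] + 0.1 * rng.standard_normal()
    nao[t] = 0.70 * nao[t - 1] + 0.05 * sst[t - 1] + rng.standard_normal()

# Does past SST add predictive information for the NAO beyond the NAO's
# own history?  Column order is [effect, candidate cause].
results = grangercausalitytests(np.column_stack([nao, sst]), maxlag=5)
```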
Abstract:
We report the results of variational calculations of the rovibrational energy levels of HCN for J = 0, 1 and 2, where we reproduce all the ca. 100 observed vibrational states for all observed isotopic species, with energies up to 18 000 cm$^{-1}$, to about $\pm $1 cm$^{-1}$, and the corresponding rotational constants to about $\pm $0.001 cm$^{-1}$. We use a Hamiltonian expressed in internal coordinates r$_{1}$, r$_{2}$ and $\theta $, using the exact expression for the kinetic energy operator T obtained by direct transformation from the cartesian representation. The potential energy V is expressed as a polynomial expansion in the Morse coordinates y$_{i}$ for the bond stretches and the interbond angle $\theta $. The basis functions are built as products of appropriately scaled Morse functions in the bond stretches and Legendre or associated Legendre polynomials of cos $\theta $ in the angle bend, and we evaluate matrix elements by Gauss quadrature. The Hamiltonian matrix is factorized using the full rovibrational symmetry, and the basis is contracted to an optimized form; the dimensions of the final Hamiltonian matrix vary from 240 $\times $ 240 to 1000 $\times $ 1000. We believe that our calculation is converged to better than 1 cm$^{-1}$ at 18 000 cm$^{-1}$. Our potential surface is expressed in terms of 31 parameters, about half of which have been refined by least squares to optimize the fit to the experimental data. The advantages, disadvantages and future potential of calculations of this type are discussed.
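A toy sketch of one ingredient of such a calculation: evaluating potential matrix elements in an orthonormal Legendre basis in cos $\theta$ by Gauss-Legendre quadrature and diagonalizing the resulting one-dimensional Hamiltonian block. The bending potential and kinetic-energy scaling below are invented for the illustration and are not the paper's HCN surface.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import eval_legendre

nbas, nquad = 12, 40
x, w = leggauss(nquad)                     # quadrature nodes in x = cos(theta)

# Orthonormal Legendre basis on [-1, 1]: sqrt((2l+1)/2) * P_l(x)
basis = np.array([np.sqrt((2 * l + 1) / 2) * eval_legendre(l, x)
                  for l in range(nbas)])

V = 0.5 * (1.0 - x) ** 2                   # hypothetical bending potential
T = np.diag(0.01 * np.arange(nbas) * (np.arange(nbas) + 1))  # ~ l(l+1) term

# Matrix elements <l|V|l'> by Gauss quadrature, then diagonalize
H = np.einsum("iq,q,q,jq->ij", basis, w, V, basis) + T
print(np.linalg.eigvalsh(H)[:5])           # lowest levels of the toy model
```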
Abstract:
Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models is estimated using Bayesian Markov chain Monte Carlo (MCMC) algorithms and compared using Bayesian model selection methods. The results suggest that the long-run drivers of Brazilian sugar prices are oil prices, and that there are nonlinearities in the adjustment of sugar and ethanol prices to oil prices but linear adjustment between ethanol and sugar prices.
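A minimal frequentist sketch of the underlying error-correction idea; the paper itself fits generalized, potentially nonlinear ECMs by Bayesian MCMC, and the synthetic "prices" and two-step OLS estimation below are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
oil = np.cumsum(rng.standard_normal(n))            # random-walk log oil price
sugar = 0.8 * oil + 0.5 * rng.standard_normal(n)   # cointegrated with oil

# Step 1: long-run relation; residuals are the disequilibrium errors.
longrun = sm.OLS(sugar, sm.add_constant(oil)).fit()
ect = longrun.resid

# Step 2: short-run dynamics with the lagged error-correction term.
dsugar, doil = np.diff(sugar), np.diff(oil)
X = sm.add_constant(np.column_stack([doil, ect[:-1]]))
ecm = sm.OLS(dsugar, X).fit()
print(ecm.params)   # last coefficient: speed of adjustment to equilibrium
```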
Abstract:
The resilience of family farming is an important feature of the structure of the farming industry in many countries, due largely to the 'smooth' succession of farms from one generation to the next. The stability of this structure is now threatened by the widening gap between the income expected from farming and that from non-farming occupations in an economy such as Ireland's, operating at almost full employment. Nominated farm heirs are increasingly unlikely to choose full-time farming as their preferred occupation. To identify the factors that affect this occupational choice, a multinomial logit model is developed and applied to Irish data to examine the farm, economic and personal characteristics that influence a nominated heir's decision to enter farming as opposed to some non-farming occupation. The results show a significant negative relationship between higher education and the choice of full-time farming as an occupation. The interdependence between education and occupational choices is further explored using a bivariate probit model. The main findings are that the occupational choice and the decision to continue with higher education are made jointly, and that nominated heirs on more profitable farms are less likely to pursue tertiary education and therefore more likely to enter full-time farming. The model developed is sufficiently general for studying the phenomenon of succession on farms.
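A schematic of the multinomial-logit setup for the heir's occupational choice; the covariates, coefficients, and simulated data below are invented, and the bivariate probit stage is omitted.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 1000
farm_profit = rng.normal(size=n)              # hypothetical covariates
higher_educ = rng.binomial(1, 0.5, size=n)

# 0 = full-time farming, 1 = part-time farming, 2 = non-farm occupation
util = np.column_stack([0.8 * farm_profit - 0.9 * higher_educ,
                        0.2 * farm_profit,
                        0.7 * higher_educ])
choice = (util + rng.gumbel(size=(n, 3))).argmax(axis=1)

X = sm.add_constant(np.column_stack([farm_profit, higher_educ]))
mnl = sm.MNLogit(choice, X).fit(disp=False)
print(mnl.summary())
```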
Abstract:
This paper reviews state-of-the-art statistical designs for dose-escalation procedures in first-into-man studies. The main focus is on studies in oncology, as most statistical procedures for phase I trials have been proposed in this context. Extensions to situations such as the observation of bivariate outcomes and healthy volunteer studies are also discussed. The number of dose levels and the cohort sizes used in early phase trials are considered. Finally, this paper raises some practical issues for dose-escalation procedures.
Abstract:
In this paper, Bayesian decision procedures are developed for dose-escalation studies based on bivariate observations of undesirable events and signs of therapeutic benefit. The methods generalize earlier approaches that take into account only the undesirable outcomes. Logistic regression models are used to model the two responses, which are both assumed to take a binary form. A prior distribution for the unknown model parameters is suggested, and an optional safety constraint can be included. Gain functions to be maximized are formulated in terms of accurate estimation of the limits of a therapeutic window or optimal treatment of the next cohort of subjects, although the approach could be applied to achieve any of a wide variety of objectives. The designs introduced are illustrated through simulation and retrospective application to a completed dose-escalation study.
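A stylized sketch of the core idea: logistic dose-response models for the two binary outcomes, with the next dose chosen to maximize a simple gain subject to an optional safety constraint. The parameter values stand in for posterior summaries and are not the paper's priors or gain functions.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

doses = np.log([1, 2, 4, 8, 16, 32])        # candidate dose levels (log scale)
p_tox = logistic(-3.0 + 1.0 * doses)        # P(undesirable event | dose)
p_ben = logistic(-2.0 + 1.2 * doses)        # P(therapeutic benefit | dose)

safe = p_tox < 0.30                          # optional safety constraint
gain = np.where(safe, p_ben * (1 - p_tox), -np.inf)
next_dose = np.exp(doses[np.argmax(gain)])
print(f"recommended next dose: {next_dose:g}")
```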
Abstract:
In survival analysis, frailty is often used to model heterogeneity between individuals or correlation within clusters. Typically, frailty is taken to be a continuous random effect, yielding a continuous mixture distribution for survival times. A Bayesian analysis of a correlated frailty model is discussed in the context of inverse Gaussian frailty. An MCMC approach is adopted, and the deviance information criterion is used to compare models. As an illustration of the approach, a bivariate data set of corneal graft survival times is analysed.
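A small simulation sketch of the frailty mechanism, assuming an inverse Gaussian frailty that multiplies a common exponential baseline hazard within each cluster (e.g. a patient's two corneal grafts); all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_clusters, base_rate = 500, 0.1
# numpy's "wald" distribution is the inverse Gaussian (mean 1 here)
frailty = rng.wald(mean=1.0, scale=2.0, size=n_clusters)

# Exponential baseline: conditional hazard = frailty * base_rate,
# with the same frailty shared by both members of each cluster.
t1 = rng.exponential(1.0 / (frailty * base_rate))
t2 = rng.exponential(1.0 / (frailty * base_rate))
print(np.corrcoef(t1, t2)[0, 1])   # frailty induces within-pair correlation
```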
Abstract:
Heterogeneity in lifetime data may be modelled by multiplying an individual's hazard by an unobserved frailty. We test for the presence of frailty of this kind in univariate and bivariate data with Weibull-distributed lifetimes, using statistics based on the ordered Cox-Snell residuals from the null model of no frailty. The form of the statistics is suggested by outlier testing in the gamma distribution. We find through simulation that the sum of the k largest or k smallest order statistics, for suitably chosen k, provides a powerful test when the frailty distribution is assumed to be gamma or positive stable, respectively. We provide recommended values of k for sample sizes up to 100 and simple formulae for estimated critical values for tests at the 5% level.
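A sketch of how such a statistic could be formed: fit the null Weibull model, compute the Cox-Snell residuals (unit exponential under the null), and sum the k largest order statistics. The choice k = 5 and the simulated data are illustrative; critical values would come from simulation, as in the paper.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(4)
t = weibull_min.rvs(1.5, scale=2.0, size=100, random_state=rng)

shape, loc, scale = weibull_min.fit(t, floc=0)   # null model: no frailty
cox_snell = (t / scale) ** shape                 # estimated cumulative hazards

k = 5
stat = np.sort(cox_snell)[-k:].sum()             # sum of the k largest
print(f"sum of {k} largest Cox-Snell residuals: {stat:.3f}")
```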
Abstract:
An evaluation of milk urea nitrogen (MUN) as a diagnostic of protein feeding in dairy cows was performed using mean treatment data (n = 306) from 50 production trials conducted in Finland (n = 48) and Sweden (n = 2). Data were used to assess the effects of diet composition and certain animal characteristics on MUN and to derive relationships between MUN and the efficiency of N utilization for milk production and urinary N excretion. Relationships were developed using regression analysis based either on models of fixed factors or on mixed models that account for between-experiment variation. Dietary crude protein (CP) content was the best single predictor of MUN and accounted for proportionately 0.778 of total variance [MUN (mg/dL) = -14.2 + 0.17 × dietary CP content (g/kg dry matter)]. The proportion of variation explained by this relationship increased to 0.952 when a mixed model including the random effects of study was used, but both the intercept and slope remained unchanged. Use of rumen degradable CP concentration in excess of predicted requirements, or of the ratio of dietary CP to metabolizable energy, as single predictors did not explain more of the variation in MUN (R^2 = 0.767 or 0.778, respectively) than dietary CP content. Inclusion of other dietary factors with dietary CP content in bivariate models resulted in only marginally better predictions of MUN (R^2 = 0.785 to 0.804). Closer relationships existed between MUN and dietary factors when nutrients (CP to metabolizable energy) were expressed as concentrations in the diet rather than as absolute intakes. Furthermore, both MUN and MUN secretion (g/d) provided more accurate predictions of urinary N excretion (R^2 = 0.787 and 0.835, respectively) than measurements of the efficiency of N utilization for milk production (R^2 = 0.769). It is concluded that dietary CP content is the most important nutritional factor influencing MUN, and that measurements of MUN can be utilized as a diagnostic of protein feeding in the dairy cow and used to predict urinary N excretion.
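The fixed-effects prediction equation quoted above, wrapped as a function for convenience; the coefficients are taken directly from the abstract.

```python
def predict_mun(cp_g_per_kg_dm: float) -> float:
    """Milk urea nitrogen (mg/dL) from dietary crude protein (g/kg DM)."""
    return -14.2 + 0.17 * cp_g_per_kg_dm

print(predict_mun(170.0))   # a 170 g CP/kg DM diet -> about 14.7 mg/dL
```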
Abstract:
In this paper we consider bilinear forms of matrix polynomials and show that these polynomials can be used to construct solutions to the problems of solving systems of linear algebraic equations, matrix inversion, and finding extremal eigenvalues. An Almost Optimal Monte Carlo (MAO) algorithm for computing bilinear forms of matrix polynomials is presented. Results are given for the computational cost of a balanced algorithm for computing the bilinear form of a matrix power, i.e., an algorithm for which the probability and systematic errors are of the same order, and this is compared with the computational cost of a corresponding deterministic method.
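A simplified Monte Carlo sketch of estimating the bilinear form v^T A^k h by random walks on the matrix indices. Uniform transition probabilities are used here for brevity, rather than the almost-optimal ones that define MAO, so the variance is larger than for the method described.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, n_walks = 20, 3, 100_000
A = rng.uniform(-1, 1, size=(n, n)) / n
v, h = rng.standard_normal(n), rng.standard_normal(n)

est = 0.0
for _ in range(n_walks):
    i = rng.integers(n)
    weight = v[i] * n            # importance weight for the uniform start
    for _ in range(k):
        j = rng.integers(n)
        weight *= A[i, j] * n    # uniform transitions, weight correction
        i = j
    est += weight * h[i]
est /= n_walks

exact = v @ np.linalg.matrix_power(A, k) @ h
print(est, exact)                # unbiased estimate vs. deterministic value
```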
Abstract:
We consider scattering of a time harmonic incident plane wave by a convex polygon with piecewise constant impedance boundary conditions. Standard finite or boundary element methods require the number of degrees of freedom to grow at least linearly with respect to the frequency of the incident wave in order to maintain accuracy. Extending earlier work by Chandler-Wilde and Langdon for the sound soft problem, we propose a novel Galerkin boundary element method, with the approximation space consisting of the products of plane waves with piecewise polynomials supported on a graded mesh with smaller elements closer to the corners of the polygon. Theoretical analysis and numerical results suggest that the number of degrees of freedom required to achieve a prescribed level of accuracy grows only logarithmically with respect to the frequency of the incident wave.
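A small sketch of the kind of graded mesh the method relies on, with points clustered algebraically toward a corner at 0; the grading exponent q below is illustrative, not the value dictated by the analysis.

```python
import numpy as np

def graded_mesh(L: float, N: int, q: float = 3.0) -> np.ndarray:
    """Mesh points x_j = L*(j/N)**q, j = 0..N, clustered toward the corner at 0."""
    return L * (np.arange(N + 1) / N) ** q

print(graded_mesh(1.0, 8))   # smaller elements near 0, larger toward L
```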
Abstract:
We consider the scattering of a time-harmonic acoustic incident plane wave by a sound soft convex curvilinear polygon with Lipschitz boundary. For standard boundary or finite element methods, with a piecewise polynomial approximation space, the number of degrees of freedom required to achieve a prescribed level of accuracy grows at least linearly with respect to the frequency of the incident wave. Here we propose a novel Galerkin boundary element method with a hybrid approximation space, consisting of the products of plane wave basis functions with piecewise polynomials supported on several overlapping meshes; a uniform mesh on illuminated sides, and graded meshes refined towards the corners of the polygon on illuminated and shadow sides. Numerical experiments suggest that the number of degrees of freedom required to achieve a prescribed level of accuracy need only grow logarithmically as the frequency of the incident wave increases.
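A sketch of a single hybrid basis function of the type described: a Legendre polynomial supported on one element of the boundary, modulated by an oscillatory plane-wave factor in the arc length s. The element, polynomial degree, and wavenumber are arbitrary choices for the illustration.

```python
import numpy as np

def hybrid_basis(s, a, b, degree, k):
    """Legendre polynomial of given degree on [a, b] times exp(i*k*s)."""
    t = 2 * (s - a) / (b - a) - 1            # map the element to [-1, 1]
    poly = np.polynomial.legendre.Legendre.basis(degree)(t)
    return np.where((s >= a) & (s <= b), poly * np.exp(1j * k * s), 0.0)

s = np.linspace(0, 1, 5)
print(hybrid_basis(s, 0.2, 0.8, 2, k=50.0))
```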
Abstract:
A simple parameter adaptive controller design methodology is introduced in which steady-state servo tracking properties provide the major control objective. This is achieved without cancellation of process zeros and hence the underlying design can be applied to non-minimum phase systems. As with other self-tuning algorithms, the design (user specified) polynomials of the proposed algorithm define the performance capabilities of the resulting controller. However, with the appropriate definition of these polynomials, the synthesis technique can be shown to admit different adaptive control strategies, e.g. self-tuning PID and self-tuning pole-placement controllers. The algorithm can therefore be thought of as an embodiment of other self-tuning design techniques. The performances of some of the resulting controllers are illustrated using simulation examples and the on-line application to an experimental apparatus.
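Self-tuning controllers of this kind rest on an on-line parameter estimator; a minimal recursive-least-squares sketch for a first-order plant is given below. The plant, forgetting factor, and excitation are invented, and the control-design step is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
a_true, b_true = 0.8, 0.5          # plant: y[t] = a*y[t-1] + b*u[t-1] + noise

theta = np.zeros(2)                # parameter estimates [a, b]
P = 1e3 * np.eye(2)                # estimate covariance
lam = 0.99                         # forgetting factor

y_prev, u_prev = 0.0, 0.0
for t in range(500):
    u = rng.standard_normal()      # persistently exciting input
    y = a_true * y_prev + b_true * u_prev + 0.05 * rng.standard_normal()
    phi = np.array([y_prev, u_prev])
    K = P @ phi / (lam + phi @ P @ phi)
    theta += K * (y - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
    y_prev, u_prev = y, u

print(theta)    # should approach [0.8, 0.5]
```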
Abstract:
The problem of identification of a nonlinear dynamic system is considered. A two-layer neural network is used for the solution of the problem. Systems disturbed by unmeasurable noise are considered, where the disturbance is known to be a random piecewise-polynomial process. Absorption polynomials and nonquadratic loss functions are used to reduce the effect of this disturbance on the estimates of the optimal memory of the neural-network model.
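A toy sketch of the robust-loss ingredient: fitting a small two-layer network with a Huber (nonquadratic) loss, whose linear tails cap the influence of large disturbances. The architecture, loss, and training loop are illustrative and are not the paper's absorption-polynomial algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(-2, 2, 200).reshape(-1, 1)
y = np.tanh(2 * x) + 0.05 * rng.standard_normal(x.shape)
y[::20] += 2.0                      # sparse large disturbances

W1, b1 = 0.5 * rng.standard_normal((1, 8)), np.zeros(8)
W2, b2 = 0.5 * rng.standard_normal((8, 1)), np.zeros(1)
delta, lr = 0.1, 0.05

for _ in range(2000):
    h = np.tanh(x @ W1 + b1)        # hidden layer
    r = (h @ W2 + b2) - y           # residuals
    # Huber gradient: linear tails cap each residual's influence
    g = np.clip(r, -delta, delta) / len(x)
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)  # backprop through tanh
    gW1, gb1 = x.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
```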
Abstract:
This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bezier-Bernstein polynomial functions. The algorithm is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bezier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bezier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on the additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bezier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
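A sketch of the univariate Bernstein-polynomial basis underpinning the construction: the functions are nonnegative and sum to one, which is what permits reading them as fuzzy membership functions. The degree and evaluation points below are arbitrary.

```python
import numpy as np
from math import comb

def bernstein_basis(x: np.ndarray, degree: int) -> np.ndarray:
    """Rows: B_{i,n}(x) = C(n,i) x^i (1-x)^(n-i), i = 0..n, for x in [0, 1]."""
    return np.array([comb(degree, i) * x**i * (1 - x)**(degree - i)
                     for i in range(degree + 1)])

x = np.linspace(0, 1, 5)
B = bernstein_basis(x, 3)
print(B.sum(axis=0))     # partition of unity: all ones
```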