994 results for Form Error Compensation


Relevance: 20.00%

Abstract:

In the 1960s, Jacob Bjerknes suggested that if the top-of-the-atmosphere (TOA) fluxes and the oceanic heat storage did not vary too much, then the total energy transport by the climate system would not vary too much either. This implies that any large anomalies of oceanic and atmospheric energy transport should be equal and opposite. This simple scenario has become known as Bjerknes compensation. A long control run of the Third Hadley Centre Coupled Ocean-Atmosphere General Circulation Model (HadCM3) has been investigated. It was found that northern extratropical decadal anomalies of atmospheric and oceanic energy transports are significantly anticorrelated and have similar magnitudes, which is consistent with the predictions of Bjerknes compensation. The degree of compensation in the northern extratropics was found to increase with increasing time scale. Bjerknes compensation did not occur in the Tropics, primarily because large changes in the surface fluxes were associated with large changes in the TOA fluxes. In the ocean, the decadal variability of the energy transport is associated with fluctuations in the meridional overturning circulation in the Atlantic Ocean. A stronger Atlantic Ocean energy transport leads to strong warming of surface temperatures in the Greenland-Iceland-Norwegian (GIN) Seas, which results in a reduced equator-to-pole surface temperature gradient and reduced atmospheric baroclinicity. It is argued that a stronger Atlantic Ocean energy transport leads to a weakened atmospheric transient energy transport.
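
The argument can be made explicit with a simple budget for the region poleward of a fixed latitude (the symbols below are introduced here for illustration and are not defined in the abstract). If H_A and H_O are the atmospheric and oceanic heat contents of that polar cap, F_TOA the net top-of-atmosphere flux into the cap, and T_A and T_O the atmospheric and oceanic energy transports across the bounding latitude, then

d(H_A + H_O)/dt = F_TOA + T_A + T_O.

If, as Bjerknes assumed, anomalies in F_TOA and in the storage term on the left are small on the time scales of interest, the transport anomalies must cancel: ΔT_A ≈ −ΔT_O, which is the compensation tested in the HadCM3 control run.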

Relevance: 20.00%

Abstract:

Flow in the world's oceans occurs at a wide range of spatial scales, from a fraction of a metre up to many thousands of kilometres. In particular, regions of intense flow are often highly localised, for example, western boundary currents, equatorial jets, overflows and convective plumes. Conventional numerical ocean models generally use static meshes. The use of dynamically-adaptive meshes has many potential advantages but needs to be guided by an error measure reflecting the underlying physics. A method of defining an error measure to guide an adaptive meshing algorithm for unstructured tetrahedral finite elements, utilizing an adjoint or goal-based method, is described here. This method is based upon a functional, encompassing important features of the flow structure. The sensitivity of this functional, with respect to the solution variables, is used as the basis from which an error measure is derived. This error measure acts to predict those areas of the domain where resolution should be changed. A barotropic wind-driven gyre problem is used to demonstrate the capabilities of the method. The overall objective of this work is to develop robust error measures for use in an oceanographic context which will ensure areas of fine mesh resolution are used only where and when they are required.
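
As a rough sketch of the goal-based idea (not the authors' implementation; the function names, the refinement fraction and the toy numbers below are illustrative only), the local indicator weights an element residual by the sensitivity of the chosen functional to the solution in that element:

import numpy as np

def goal_based_indicator(residual, dJ_du):
    """Element-wise indicator: |local residual| weighted by |dJ/du|,
    where dJ/du is the sensitivity of the goal functional J to the
    solution in each element (e.g. obtained from an adjoint solve)."""
    return np.abs(residual) * np.abs(dJ_du)

def mark_for_refinement(indicator, frac=0.1):
    """Flag the fraction of elements with the largest indicator values."""
    n_refine = max(1, int(frac * indicator.size))
    threshold = np.sort(indicator)[-n_refine]
    return indicator >= threshold

residual = np.array([0.10, 0.02, 0.05, 0.03])   # toy element residuals
dJ_du = np.array([1.0, 0.2, 0.1, 2.0])          # toy adjoint sensitivities
print(mark_for_refinement(goal_based_indicator(residual, dJ_du), frac=0.5))

In an adaptive loop the flagged elements would be refined and the remainder coarsened, concentrating resolution where it matters for the functional rather than wherever the raw residual happens to be large.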

Relevance: 20.00%

Abstract:

Constant-α force-free magnetic flux rope models have proven to be a valuable first step toward understanding the global context of in situ observations of magnetic clouds. However, cylindrical symmetry is necessarily assumed when using such models, and it is apparent from both observations and modeling that magnetic clouds have highly noncircular cross sections. A number of approaches have been adopted to relax the circular cross section approximation: frequently, the cross-sectional shape is allowed to take an arbitrarily chosen shape (usually elliptical), increasing the number of free parameters that are fit between data and model. While a better “fit” may be achieved in terms of reducing the mean square error between the model and observed magnetic field time series, it is not always clear that this translates to a more accurate reconstruction of the global structure of the magnetic cloud. We develop a new, noncircular cross section flux rope model that is constrained by observations of CMEs/ICMEs and knowledge of the physical processes acting on the magnetic cloud: The magnetic cloud is assumed to initially take the form of a force-free flux rope in the low corona but to be subsequently deformed by a combination of axis-centered self-expansion and heliocentric radial expansion. The resulting analytical solution is validated by fitting to artificial time series produced by numerical MHD simulations of magnetic clouds and shown to accurately reproduce the global structure.
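
The validation step amounts to a least-squares fit of an analytical field model to an artificial spacecraft time series. A generic sketch of that step follows (the model function, its parameters and the synthetic data are placeholders, not the deformed flux-rope solution of the paper):

import numpy as np
from scipy.optimize import least_squares

def model_field(t, params):
    """Placeholder analytical model for the field seen along a spacecraft
    trajectory; in the paper's setting this would be the deformed
    force-free flux-rope solution described above."""
    B0, tau, phase = params
    decay = np.exp(-t / tau)                     # crude stand-in for expansion effects
    return np.column_stack([B0 * decay * np.cos(phase + 0.2 * t),
                            B0 * decay * np.sin(phase + 0.2 * t),
                            0.3 * B0 * decay])

def residuals(params, t, B_obs):
    return (model_field(t, params) - B_obs).ravel()

# Artificial "observed" series standing in for output of an MHD simulation
t = np.linspace(0.0, 30.0, 200)
B_obs = model_field(t, [12.0, 25.0, 0.5]) + np.random.default_rng(1).normal(0, 0.3, (200, 3))

fit = least_squares(residuals, x0=[8.0, 15.0, 0.0], args=(t, B_obs))
print(fit.x)   # recovered parameters should lie close to [12, 25, 0.5]

Minimizing the mean square error between model and synthetic time series in this way is the sense in which the analytical solution is said to reproduce the simulated global structure.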

Relevance: 20.00%

Abstract:

Following an introduction to the diagonalization of matrices, one of the more difficult topics for students to grasp in linear algebra is the concept of Jordan normal form. In this note, we show how the important notions of diagonalization and Jordan normal form can be introduced and developed through the use of the computer algebra package Maple®.
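
The same demonstration translates directly to other computer algebra systems; a minimal sketch in Python's sympy (the note itself uses Maple worksheets) is:

from sympy import Matrix, pprint

# A defective matrix: eigenvalue 2 has only one independent eigenvector,
# so it cannot be diagonalized and a Jordan block appears instead.
A = Matrix([[2, 1],
            [0, 2]])
P, J = A.jordan_form()   # A == P * J * P**-1
pprint(J)                # Jordan block [[2, 1], [0, 2]]

# A diagonalizable matrix reduces to a diagonal Jordan form.
B = Matrix([[2, 0],
            [1, 3]])
Q, D = B.jordan_form()
pprint(D)                # diagonal, with entries 2 and 3

Comparing the two outputs makes the link between the topics concrete: the Jordan form is diagonal exactly when the matrix is diagonalizable.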

Relevance: 20.00%

Abstract:

The paper considers meta-analysis of diagnostic studies that use a continuous score for classification of study participants into healthy or diseased groups. Classification is often done on the basis of a threshold or cut-off value, which might vary between studies. Consequently, conventional meta-analysis methodology focusing solely on separate analysis of sensitivity and specificity might be confounded by a potentially unknown variation of the cut-off value. To cope with this phenomenon it is suggested to use instead an overall estimate of the misclassification error, previously suggested and known as Youden's index; furthermore, it is argued that this index is less prone to between-study variation of cut-off values. A simple Mantel–Haenszel estimator as a summary measure of the overall misclassification error is suggested, which adjusts for a potential study effect. The measure of the misclassification error based on Youden's index is advantageous in that it easily allows an extension to a likelihood approach, which is then able to cope with unobserved heterogeneity via a nonparametric mixture model. All methods are illustrated with an example of a diagnostic meta-analysis on duplex Doppler ultrasound, with angiography as the standard, for stroke prevention.
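
The quantity being pooled can be illustrated with a toy calculation (the 2 × 2 counts and the simple size weights below are invented and do not reproduce the Mantel–Haenszel weighting proposed in the paper):

import numpy as np

# Hypothetical per-study counts at each study's own cut-off value:
# columns are true positives, false negatives, true negatives, false positives
studies = np.array([[45.0,  5.0, 80.0, 20.0],
                    [30.0, 10.0, 60.0, 15.0],
                    [55.0, 15.0, 90.0, 10.0]])
tp, fn, tn, fp = studies.T
sens = tp / (tp + fn)
spec = tn / (tn + fp)
youden = sens + spec - 1          # Youden's index per study
error = (1 - youden) / 2          # corresponding misclassification probability
weights = studies.sum(axis=1)     # crude size weights, for illustration only
print(youden, (weights * error).sum() / weights.sum())

Because Youden's index combines sensitivity and specificity into a single quantity, pooling the associated misclassification error is less affected by studies having used different cut-off values than pooling sensitivity and specificity separately would be.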

Relevance: 20.00%

Abstract:

Recent studies into price transmission have recognized the important role played by transport and transaction costs. Threshold models are one approach to accommodate such costs. We develop a generalized Threshold Error Correction Model to test for the presence and form of threshold behavior in price transmission that is symmetric around equilibrium. We use monthly wheat, maize, and soya prices from the United States, Argentina, and Brazil to demonstrate this model. Classical estimation of these generalized models can present challenges but Bayesian techniques avoid many of these problems. Evidence for thresholds is found in three of the five commodity price pairs investigated.
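
The kind of behaviour such a model captures can be sketched with a small simulation (all parameter values are invented; this illustrates the symmetric band-threshold idea, not the authors' estimated specification):

import numpy as np

rng = np.random.default_rng(0)
T, beta, tau = 500, 1.0, 0.5          # length, long-run slope, threshold half-width
alpha_out, alpha_in = -0.4, -0.05     # adjustment speed outside / inside the band

p2 = np.cumsum(rng.normal(scale=0.3, size=T))   # "upstream" price, a random walk
p1 = np.empty(T)
p1[0] = p2[0]
for t in range(1, T):
    z = p1[t - 1] - beta * p2[t - 1]            # disequilibrium error
    alpha = alpha_out if abs(z) > tau else alpha_in
    p1[t] = p1[t - 1] + alpha * z + rng.normal(scale=0.2)

print(np.mean(np.abs(p1 - beta * p2) > tau))    # share of periods outside the band

Inside the band |z| ≤ τ transport and transaction costs make arbitrage unprofitable, so adjustment toward equilibrium is weak; outside the band it is fast, and because only |z| matters the adjustment is symmetric around equilibrium, which is the form of threshold behaviour tested above.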

Relevance: 20.00%

Abstract:

Nonlinear adjustment toward long-run price equilibrium relationships in the sugar-ethanol-oil nexus in Brazil is examined. We develop generalized bivariate error correction models that allow for cointegration between sugar, ethanol, and oil prices, where dynamic adjustments are potentially nonlinear functions of the disequilibrium errors. A range of models are estimated using Bayesian Markov chain Monte Carlo algorithms and compared using Bayesian model selection methods. The results suggest that the long-run drivers of Brazilian sugar prices are oil prices and that there are nonlinearities in the adjustment processes of sugar and ethanol prices to the oil price, but linear adjustment between ethanol and sugar prices.
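
A stylised version of such a system can be written down directly (the adjustment rule and all coefficients below are invented for illustration and are not the specification selected by the model comparison):

import numpy as np

rng = np.random.default_rng(1)
T = 500
oil = np.cumsum(rng.normal(scale=0.3, size=T))   # long-run driver, a random walk
sugar = np.empty(T)
ethanol = np.empty(T)
sugar[0] = ethanol[0] = oil[0]
for t in range(1, T):
    z = sugar[t - 1] - oil[t - 1]                # disequilibrium with the oil price
    speed = min(0.8, 0.05 + 0.10 * z**2)         # nonlinear: adjustment accelerates far from equilibrium
    sugar[t] = sugar[t - 1] - speed * z + rng.normal(scale=0.2)
    # ethanol adjusts linearly to its disequilibrium with sugar
    ethanol[t] = ethanol[t - 1] - 0.3 * (ethanol[t - 1] - sugar[t - 1]) + rng.normal(scale=0.2)

print(np.corrcoef(sugar, oil)[0, 1])             # sugar tracks the long-run oil driver

In the Bayesian exercise described above, competing forms of the adjustment function (linear, threshold, smooth nonlinear) would each be estimated by MCMC and then compared with Bayesian model selection criteria.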

Relevance: 20.00%

Abstract:

Since estimated dietary selenium intake in the UK has declined steadily from around 60 µg day⁻¹ in 1975 to 34 µg day⁻¹ in 1997, there is a need to increase selenium intake from staple foods such as milk and milk products. An experiment was therefore done to investigate the relationship between dietary source and concentration of selenium and the selenium content of bovine milk. In a 3 × 3 factorial design, 90 mid-lactation Holstein dairy cows were supplemented over 8 weeks with either sodium selenite (S), a chelated selenium product (Selenium Metasolate™) (C) or a selenium yeast (Sel-plex™) (Y) at three different dietary inclusion levels of 0.38 (L), 0.76 (M) and 1.14 (H) mg kg⁻¹ dry matter (DM). Significant increases in milk selenium concentration were observed for all three sources with increasing inclusion level in the diet, but Y gave a much greater response (up to +65 µg l⁻¹) than the other two sources of selenium (S and C up to +4 and +6 µg l⁻¹ respectively). The Y source also resulted in a substantially higher apparent efficiency of transfer of selenium from diet to milk than S or C. Feeding Y at the lowest dietary concentration, and thus within the maximum level permitted under EU regulations, resulted in milk with a selenium concentration of 28 µg l⁻¹. If the selenium concentration of milk in the UK was increased to this value, it would, at current consumption rates, provide an extra 8.7 µg selenium day⁻¹, or 11 and 14% of the daily recommended national intake for men and women respectively.
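
The closing figures can be checked with back-of-envelope arithmetic; the reference nutrient intakes used below (75 and 60 µg per day for men and women) are assumed values, not numbers taken from the abstract, so the shares only approximately reproduce the 11% and 14% quoted:

milk_se = 28.0        # µg of selenium per litre of milk (study result above)
extra = 8.7           # extra µg of selenium per day quoted above
print(round(extra / milk_se, 2))   # implied milk consumption of about 0.31 l per day
rni_men, rni_women = 75.0, 60.0    # assumed UK reference nutrient intakes, µg per day
print(round(100 * extra / rni_men, 1), round(100 * extra / rni_women, 1))   # ≈ 11.6 and 14.5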

Relevance: 20.00%

Abstract:

Previous attempts to apply statistical models, which correlate nutrient intake with methane production, have been of limited value where predictions are obtained for nutrient intakes and diet types outside those used in model construction. Dynamic mechanistic models have proved more suitable for extrapolation, but they remain computationally expensive and are not applied easily in practical situations. The first objective of this research focused on employing conventional techniques to generate statistical models of methane production appropriate to United Kingdom dairy systems. The second objective was to evaluate these models and a model published previously using both United Kingdom and North American data sets. Thirdly, nonlinear models were considered as alternatives to the conventional linear regressions. The United Kingdom calorimetry data used to construct the linear models also were used to develop the three nonlinear alternatives, all of modified Mitscherlich (monomolecular) form. Of the linear models tested, an equation from the literature proved most reliable across the full range of evaluation data (root mean square prediction error = 21.3%). However, the Mitscherlich models demonstrated the greatest degree of adaptability across diet types and intake levels. The most successful model for simulating the independent data was a modified Mitscherlich equation with the steepness parameter set to represent the dietary starch-to-ADF ratio (root mean square prediction error = 20.6%). However, when such data were unavailable, simpler Mitscherlich forms relating dry matter or metabolizable energy intake to methane production remained better alternatives relative to their linear counterparts.
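
The monomolecular (Mitscherlich) form referred to above rises from a baseline toward an asymptote as intake increases; a sketch with invented parameter values (the published coefficients are deliberately not reproduced here) is:

import numpy as np

def methane_mitscherlich(intake, a=50.0, b=0.0, c=0.05):
    """Monomolecular curve: close to b at zero intake, saturating at a.
    In the best-performing model above the steepness parameter c is tied
    to the dietary starch-to-ADF ratio; the values here are placeholders."""
    return a - (a - b) * np.exp(-c * intake)

intake = np.linspace(0.0, 25.0, 6)       # e.g. dry matter intake, kg per day
print(methane_mitscherlich(intake))      # methane output in arbitrary units

Unlike a straight line, the curve levels off at high intakes, which is consistent with the greater adaptability across diet types and intake levels reported above.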

Relevance: 20.00%

Abstract:

Background noise should in theory hinder detection of auditory cues associated with approaching danger. We tested whether foraging chaffinches Fringilla coelebs responded to background noise by increasing vigilance, and examined whether this was explained by predation risk compensation or by a novel stimulus hypothesis. The former predicts that only inter-scan interval should be modified in the presence of background noise, not vigilance levels generally. This is because noise hampers auditory cue detection and increases perceived predation risk primarily when in the head-down position, and also because previous tests have shown that only inter-scan interval is correlated with predator detection ability in this system. Chaffinches only modified inter-scan interval, supporting this hypothesis. At the same time they made significantly fewer pecks when feeding during the background noise treatment, and so the increased vigilance led to a reduction in intake rate, suggesting that compensating for the increased predation risk could indirectly lead to a fitness cost. Finally, the novel stimulus hypothesis predicts that chaffinches should habituate to the noise, which did not occur within a trial or over 5 subsequent trials. We conclude that auditory cues may be an important component of the trade-off between vigilance and feeding, and discuss possible implications for anti-predation theory and ecological processes.

Relevance: 20.00%

Abstract:

Sequential methods provide a formal framework by which clinical trial data can be monitored as they accumulate. The results from interim analyses can be used either to modify the design of the remainder of the trial or to stop the trial as soon as sufficient evidence of either the presence or absence of a treatment effect is available. The circumstances under which the trial will be stopped with a claim of superiority for the experimental treatment must, however, be determined in advance so as to control the overall type I error rate. One approach to calculating the stopping rule is the group-sequential method. A relatively recent alternative to group-sequential approaches is the adaptive design method. This latter approach provides considerable flexibility in changes to the design of a clinical trial at an interim point. However, a criticism is that the method by which evidence from different parts of the trial is combined means that a final comparison of treatments is not based on a sufficient statistic for the treatment difference, suggesting that the method may lack power. The aim of this paper is to compare two adaptive design approaches with the group-sequential approach. We first compare the form of the stopping boundaries obtained using the different methods. We then focus on a comparison of the power of the different trials when they are designed so as to be as similar as possible. We conclude that all methods acceptably control type I error rate and power when the sample size is modified based on a variance estimate, provided no interim analysis is so small that the asymptotic properties of the test statistic no longer hold. In the latter case, the group-sequential approach is to be preferred. Provided that asymptotic assumptions hold, the adaptive design approaches control the type I error rate even if the sample size is adjusted on the basis of an estimate of the treatment effect, showing that the adaptive designs allow more modifications than the group-sequential method.
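
The last point can be illustrated with a small simulation (an invented two-stage example using the inverse-normal combination rule, one concrete adaptive method, rather than the specific designs compared in the paper):

import numpy as np

rng = np.random.default_rng(42)
n1, n_sim, crit = 50, 20000, 1.96       # stage-1 size per arm, replicates, one-sided 2.5% critical value
rejections = 0
for _ in range(n_sim):
    # stage 1 under the null hypothesis of no treatment difference
    x1, y1 = rng.normal(0, 2, n1), rng.normal(0, 2, n1)
    s1 = np.sqrt((x1.var(ddof=1) + y1.var(ddof=1)) / 2)
    z1 = (x1.mean() - y1.mean()) / (s1 * np.sqrt(2 / n1))
    # stage-2 sample size re-estimated from the stage-1 variance only (the rule is arbitrary here)
    n2 = int(np.clip(np.ceil(16 * s1**2), 20, 400))
    x2, y2 = rng.normal(0, 2, n2), rng.normal(0, 2, n2)
    s2 = np.sqrt((x2.var(ddof=1) + y2.var(ddof=1)) / 2)
    z2 = (x2.mean() - y2.mean()) / (s2 * np.sqrt(2 / n2))
    # inverse-normal combination with prespecified equal weights is valid whatever n2 was chosen
    rejections += (z1 + z2) / np.sqrt(2) > crit
print(rejections / n_sim)               # close to the nominal 0.025

Because the stage weights are fixed in advance, the combined statistic is standard normal under the null however the interim data are used to choose the second-stage sample size, which is the flexibility referred to above; the price is that the final comparison is not based on a sufficient statistic for the treatment difference.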

Relevance: 20.00%

Abstract:

Proportion estimators are quite frequently used in many application areas. The conventional proportion estimator (number of events divided by sample size) encounters a number of problems when the data are sparse, as will be demonstrated in various settings. The problem of estimating its variance when sample sizes become small is rarely addressed in a satisfying framework. Specifically, we have in mind applications like the weighted risk difference in multicenter trials or stratified risk ratio estimators (to adjust for potential confounders) in epidemiological studies. It is suggested to estimate p using the parametric family p̂_c = (X + c)/(n + 2c), where X is the number of events, and to estimate p(1 − p) using the corresponding plug-in p̂_c(1 − p̂_c). We investigate the estimation problem of choosing c ≥ 0 from various perspectives, including minimizing the average mean squared error of p̂_c and the average bias and average mean squared error of the p(1 − p) estimator. The optimal value of c for minimizing the average mean squared error of p̂_c is found to be independent of n and equals c = 1. The optimal value of c for minimizing the average mean squared error of the p(1 − p) estimator is found to depend on n, with limiting value c = 0.833. This might justify using the near-optimal value c = 1 in practice, which also turns out to be beneficial when constructing confidence intervals of the form p̂_c ± z √(p̂_c(1 − p̂_c)/n).
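
The c = 1 result for the estimator family written above can be checked numerically (a sketch using an illustrative n and a uniform grid of true proportions in place of an integral):

import numpy as np
from math import comb

def average_mse(c, n, p_grid):
    """Exact mean squared error of (X + c)/(n + 2c) under Binomial(n, p),
    averaged over the supplied grid of true proportions p."""
    x = np.arange(n + 1)
    binom_coeff = np.array([comb(n, k) for k in x], dtype=float)
    total = 0.0
    for p in p_grid:
        probs = binom_coeff * p**x * (1 - p)**(n - x)
        estimate = (x + c) / (n + 2 * c)
        total += np.sum(probs * (estimate - p) ** 2)
    return total / len(p_grid)

p_grid = np.linspace(0.005, 0.995, 199)
for c in (0.0, 0.5, 1.0, 2.0):
    print(c, round(average_mse(c, n=10, p_grid=p_grid), 5))

The conventional estimator corresponds to c = 0; among the values tried, the average mean squared error is smallest at c = 1, in line with the optimality result quoted above.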