839 results for Least Square Adjustment
Abstract:
On the time scale of a century, the Atlantic thermohaline circulation (THC) is sensitive to the global surface salinity distribution. The advection of salinity toward the deep convection sites of the North Atlantic is one of the driving mechanisms for the THC. There are both northward and southward contributions. The northward salinity advection (Nsa) is related to the evaporation in the subtropics, and contributes to increased salinity in the convection sites. The southward salinity advection (Ssa) is related to the Arctic freshwater forcing and tends, on the contrary, to diminish salinity in the convection sites. The THC change results from a delicate balance between these opposing mechanisms. In this study we evaluate these two effects using the IPSL-CM4 ocean-atmosphere-sea-ice coupled model (used for IPCC AR4). Perturbation experiments have been integrated for 100 years under modern insolation and trace gases. River runoff and evaporation minus precipitation are successively set to zero for the ocean during the coupling procedure. This allows the effect of processes Nsa and Ssa to be estimated with their specific time scales. It is shown that the convection sites in the North Atlantic exhibit various sensitivities to these processes. The Labrador Sea exhibits a dominant sensitivity to local forcing and Ssa with a typical time scale of 10 years, whereas the Irminger Sea is mostly sensitive to Nsa with a 15 year time scale. The GIN Seas respond to both effects with a time scale of 10 years for Ssa and 20 years for Nsa. It is concluded that, in the IPSL-CM4, the global freshwater forcing damps the THC on centennial time scales.
Abstract:
Six parameters uniquely describe the orbit of a body about the Sun. Given these parameters, it is possible to make predictions of the body's position by solving its equation of motion. The parameters cannot be directly measured, so they must be inferred indirectly by an inversion method which uses measurements of other quantities in combination with the equation of motion. Inverse techniques are valuable tools in many applications where only noisy, incomplete, and indirect observations are available for estimating parameter values. The methodology of the approach is introduced and the Kepler problem is used as a real-world example. (C) 2003 American Association of Physics Teachers.
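The inversion methodology described above can be sketched with a toy Gauss-Newton least-squares fit: the model is repeatedly linearised about the current parameter estimate and the normal equations are solved for an update. The forward model below (a decaying exponential with hypothetical parameters) stands in for the orbital equation of motion; it is a minimal illustration of the technique, not the Kepler problem itself.

```python
import numpy as np

def gauss_newton(model, jac, t, y, p0, n_iter=20):
    """Generic Gauss-Newton least-squares inversion: linearise the model
    about the current estimate and solve for the parameter update."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = y - model(t, p)              # residuals: data minus prediction
        J = jac(t, p)                    # Jacobian of model w.r.t. parameters
        dp, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p + dp
    return p

# Hypothetical forward model y = a * exp(-b t), standing in for the
# solution of the body's equation of motion.
def model(t, p):
    a, b = p
    return a * np.exp(-b * t)

def jac(t, p):
    a, b = p
    return np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 50)
y = model(t, (3.0, 1.5)) + 0.01 * rng.normal(size=t.size)   # noisy observations

p_hat = gauss_newton(model, jac, t, y, p0=(2.0, 1.0))
```

As in the orbital case, the parameters are never measured directly: only noisy observations of the model output constrain them, and the estimate converges from an initial guess.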
Abstract:
High resolution vibration-rotation spectra of 13C2H2 were recorded in a number of regions from 2000 to 5200 cm−1 at Doppler or pressure limited resolution. In these spectral ranges cold and hot bands involving the bending-stretching combination levels have been analyzed up to high J values. Anharmonic quartic resonances for the combination levels ν1 + mν4 + nν5, ν2 + mν4 + (n + 2) ν5 and ν3 + (m − 1) ν4 + (n + 1) ν5 have been studied, and the l-type resonances within each polyad have been explicitly taken into account in the analysis of the data. The least-squares refinement provides deperturbed values for band origins and rotational constants, obtained by fitting rotation lines only up to J ≈ 20 with root mean square errors of ≈ 0.0003 cm−1. The band origins allowed us to determine a number of the anharmonicity constants x⁰ij.
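The core of such a refinement is a least-squares fit of term values to a band origin and rotational constants. A heavily simplified, resonance-free sketch (synthetic data, made-up constants; the real analysis also handles the l-type and anharmonic resonances) fits T(J) = ν0 + B·J(J+1) − D·[J(J+1)]², which is linear in the three unknowns:

```python
import numpy as np

# Synthetic term values up to J = 20, with made-up constants and
# noise comparable to the ~0.0003 cm-1 rms quoted in the text.
J = np.arange(0, 21)
x = J * (J + 1)
nu0_true, B_true, D_true = 2000.0, 1.12, 1.6e-6
rng = np.random.default_rng(1)
T_obs = nu0_true + B_true * x - D_true * x**2 + rng.normal(0.0, 3e-4, J.size)

# Design matrix for the linear model T = nu0 + B*x - D*x^2.
A = np.column_stack([np.ones_like(x, dtype=float), x.astype(float), -x.astype(float) ** 2])
(nu0, B, D), *_ = np.linalg.lstsq(A, T_obs, rcond=None)
rms = np.sqrt(np.mean((A @ [nu0, B, D] - T_obs) ** 2))
```

Restricting the fit to low J, as the authors do, keeps the deperturbed constants from absorbing higher-order resonance effects.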
Abstract:
The theory of harmonic force constant refinement calculations is reviewed, and a general-purpose program for force constant and normal coordinate calculations is described. The program, called ASYM20, is available through Quantum Chemistry Program Exchange. It will work on molecules of any symmetry containing up to 20 atoms and will produce results on a series of isotopomers as desired. The vibrational secular equations are solved in either nonredundant valence internal coordinates or symmetry coordinates. As well as calculating the (harmonic) vibrational wavenumbers and normal coordinates, the program will calculate centrifugal distortion constants, Coriolis zeta constants, harmonic contributions to the α's, root-mean-square amplitudes of vibration, and other quantities related to gas electron-diffraction studies and thermodynamic properties. The program will work in either a predict mode, in which it calculates results from an input force field, or in a refine mode, in which it refines an input force field by least squares to fit observed data on the quantities mentioned above. Predicate values of the force constants may be included in the data set for a least-squares refinement. The program is written in FORTRAN for use on a PC or a mainframe computer. Operation is mainly controlled by steering indices in the input data file, but some interactive control is also implemented.
Abstract:
Parameters to be determined in a least squares refinement calculation to fit a set of observed data may sometimes usefully be `predicated' to values obtained from some independent source, such as a theoretical calculation. An algorithm for achieving this in a least squares refinement calculation is described, which leaves the operator in full control of the weight that he may wish to attach to the predicate values of the parameters.
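The standard way to realise this is to append the predicate values to the data as extra observations, one identity row per parameter, each carrying an operator-chosen weight. A minimal numpy sketch (hypothetical two-parameter straight-line example; the variable names are illustrative, not from the paper):

```python
import numpy as np

def predicated_lsq(A, y, w_obs, p_pred, w_pred):
    """Least squares with 'predicate' parameter values: the predicates are
    appended as extra observations (one identity row per parameter), with
    weights chosen freely by the operator."""
    n_par = A.shape[1]
    A_aug = np.vstack([A, np.eye(n_par)])
    y_aug = np.concatenate([y, p_pred])
    w = np.concatenate([w_obs, w_pred])
    sw = np.sqrt(w)
    # Weighted least squares via row scaling of the augmented system.
    sol, *_ = np.linalg.lstsq(A_aug * sw[:, None], y_aug * sw, rcond=None)
    return sol

# Noisy straight-line data plus a theoretical predicate of 2.1 for the slope.
rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 10)
A = np.column_stack([np.ones_like(x), x])
y = 1.0 + 2.0 * x + 0.05 * rng.normal(size=x.size)

# Heavy predicate weight pins the slope near its theoretical value;
# zero weight recovers the ordinary unconstrained fit.
p_strong = predicated_lsq(A, y, np.ones(x.size), [0.0, 2.1], [0.0, 1e6])
p_free   = predicated_lsq(A, y, np.ones(x.size), [0.0, 2.1], [0.0, 0.0])
```

Varying the predicate weight continuously interpolates between the purely observational fit and one dominated by the theoretical values, which is exactly the operator control the abstract describes.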
Abstract:
In this paper we consider the estimation of population size from one-source capture–recapture data, that is, a list in which individuals can potentially be found repeatedly and where the question is how many individuals are missed by the list. As a typical example, we provide data from a drug user study in Bangkok from 2001 where the list consists of drug users who repeatedly contact treatment institutions. Drug users with 1, 2, 3, . . . contacts occur, but drug users with zero contacts are not present, requiring the size of this group to be estimated. Statistically, these data can be considered as stemming from a zero-truncated count distribution. We revisit an estimator for the population size suggested by Zelterman that is known to be robust under potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a locally truncated Poisson likelihood which is equivalent to a binomial likelihood. This result allows the extension of the Zelterman estimator by means of logistic regression to include observed heterogeneity in the form of covariates. We also review an estimator proposed by Chao and explain why we are not able to obtain similar results for this estimator. The Zelterman estimator is applied in two case studies, the first a drug user study from Bangkok, the second an illegal immigrant study in the Netherlands. Our results suggest the new estimator should be used, in particular, if substantial unobserved heterogeneity is present.
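In its standard (covariate-free) form, the Zelterman estimator uses only the singleton and doubleton frequencies f1 and f2 to form a local Poisson rate λ̂ = 2·f2/f1, and then inflates the observed count n by the estimated probability of being seen at all. A short sketch of that textbook form (the counts below are made up):

```python
import math

def zelterman(freqs):
    """Zelterman's population-size estimator from zero-truncated counts.

    freqs[j] is the number of individuals observed exactly j+1 times.
    Standard form: lambda_hat = 2*f2/f1, N_hat = n / (1 - exp(-lambda_hat)),
    where n is the total number of observed individuals.
    """
    f1, f2 = freqs[0], freqs[1]
    n = sum(freqs)
    lam = 2.0 * f2 / f1          # local Poisson rate, robust to heterogeneity
    return n / (1.0 - math.exp(-lam))

# Hypothetical contact-frequency data: 100 seen once, 50 twice, 10 three times.
n_hat = zelterman([100, 50, 10])
```

The paper's extension replaces the constant λ̂ with a logistic regression on covariates; the sketch above shows only the baseline estimator it generalises.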
Abstract:
Advancing maize crop maturity is associated with changes in ear-to-stover ratio which may have consequences for the digestibility of the ensiled crop. The apparent digestibility and nitrogen retention of three diets (Early, Mid and Late) containing maize silages made from maize of advancing harvest date [dry matter (DM) contents of the maize silages were 273, 314 and 367 g kg−1 for the silages in the Early, Mid and Late diets respectively], together with a protein supplement offered in sufficient quantities to make the diets isonitrogenous, were measured in six Holstein-Friesian steers in an incomplete Latin square design with four periods. Dry-matter intake of maize silage tended to be least for the Early diet and greatest for the Mid diet (P = 0.182). Apparent digestibility of DM and organic matter did not differ between diets. Apparent digestibility of energy was lowest in the Late diet (P = 0.057) and the metabolizable energy concentrations of the three silages were calculated as 11.0, 11.1 and 10.6 MJ kg−1 DM for the Early, Mid and Late diets respectively (P = 0.068). No differences were detected between diets in starch digestibility but the number of undamaged grains present in the faeces of animals fed the Late diet was significantly higher than with the Early and Mid diets (P = 0.006). The apparent digestibility of neutral-detergent fibre of the diets reduced significantly as silage DM content increased (P = 0.012) with a similar trend for the apparent digestibility of acid-detergent fibre (P = 0.078). Apparent digestibility of nitrogen (N) was similar for the Early and Mid diets, both being greater than the Late diet (P = 0.035). Nitrogen retention did not differ between diets. It was concluded that delaying harvest until the DM content is above 300 g kg−1 can negatively affect the nutritive value of maize silage in the UK.
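The apparent digestibility coefficients reported throughout this abstract follow the standard definition: the fraction of the nutrient consumed that does not reappear in the faeces. A one-line sketch with illustrative (made-up) figures:

```python
def apparent_digestibility(intake_g, faecal_output_g):
    """Apparent digestibility coefficient: the fraction of the nutrient
    consumed that is not recovered in the faeces (standard definition)."""
    return (intake_g - faecal_output_g) / intake_g

# Hypothetical example: 1000 g DM consumed, 300 g recovered in faeces.
adc = apparent_digestibility(1000.0, 300.0)
```

It is "apparent" because endogenous (non-dietary) losses in the faeces are not separated out.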
Abstract:
Grass-based diets are of increasing socio-economic importance in dairy cattle farming, but their low supply of glucogenic nutrients may limit the production of milk. Current evaluation systems that assess the energy supply and requirements are based on metabolisable energy (ME) or net energy (NE). These systems do not consider the characteristics of the energy delivering nutrients. In contrast, mechanistic models take into account the site of digestion, the type of nutrient absorbed and the type of nutrient required for production of milk constituents, and may therefore give a better prediction of supply and requirement of nutrients. The objective of the present study is to compare the ability of three energy evaluation systems, viz. the Dutch NE system, the Agricultural and Food Research Council (AFRC) ME system, and the feed into milk (FIM) ME system, and of a mechanistic model based on Dijkstra et al. [Simulation of digestion in cattle fed sugar cane: prediction of nutrient supply for milk production with locally available supplements. J. Agric. Sci., Cambridge 127, 247-60] and Mills et al. [A mechanistic model of whole-tract digestion and methanogenesis in the lactating dairy cow: model development, evaluation and application. J. Anim. Sci. 79, 1584-97] to predict the feed value of grass-based diets for milk production. The dataset for evaluation consists of 41 treatments of grass-based diets (at least 0.75 g ryegrass/g diet on DM basis). For each model, the predicted energy or nutrient supply, based on observed intake, was compared with predicted requirement based on observed performance. Assessment of the error of energy or nutrient supply relative to requirement is made by calculation of mean square prediction error (MSPE) and by concordance correlation coefficient (CCC). All energy evaluation systems predicted energy requirement to be lower (6-11%) than energy supply.
The root MSPE (expressed as a proportion of the supply) was lowest for the mechanistic model (0.061), followed by the Dutch NE system (0.082), FIM ME system (0.097) and AFRC ME system (0.118). For the energy evaluation systems, the error due to overall bias of prediction dominated the MSPE, whereas for the mechanistic model, proportionally 0.76 of MSPE was due to random variation. CCC analysis confirmed the higher accuracy and precision of the mechanistic model compared with energy evaluation systems. The error of prediction was positively related to grass protein content for the Dutch NE system, and was also positively related to grass DMI level for all models. In conclusion, current energy evaluation systems overestimate energy supply relative to energy requirement on grass-based diets for dairy cattle. The mechanistic model predicted glucogenic nutrients to limit performance of dairy cattle on grass-based diets, and proved to be more accurate and precise than the energy systems. The mechanistic model could be improved by allowing the glucose maintenance and utilization requirement parameters to be variable. (C) 2007 Elsevier B.V. All rights reserved.
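The two evaluation statistics named above are standard. MSPE decomposes exactly into error due to overall bias, error due to deviation of the regression slope from one, and random variation; CCC (Lin's concordance correlation coefficient) jointly measures precision and accuracy. A generic sketch of both (not the paper's code; the example numbers are made up):

```python
import numpy as np

def evaluate(pred, obs):
    """Mean square prediction error with its standard three-way
    decomposition (bias, slope, random), and Lin's concordance
    correlation coefficient (CCC)."""
    p, o = np.asarray(pred, float), np.asarray(obs, float)
    sp, so = p.std(), o.std()                    # population SDs
    r = np.corrcoef(p, o)[0, 1]
    mspe = np.mean((p - o) ** 2)
    bias = (p.mean() - o.mean()) ** 2            # error due to overall bias
    slope = (sp - r * so) ** 2                   # error due to slope deviation
    random = (1.0 - r ** 2) * so ** 2            # error due to random variation
    ccc = 2 * r * sp * so / (sp ** 2 + so ** 2 + (p.mean() - o.mean()) ** 2)
    return mspe, (bias, slope, random), ccc

# Hypothetical predicted-supply vs required-energy values.
mspe, parts, ccc = evaluate([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

The three components sum to the MSPE, which is how the text can attribute 0.76 of the mechanistic model's MSPE to random variation.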
Abstract:
When formulating least-cost poultry diets, ME concentration should be optimised by an iterative procedure, not entered as a fixed value. This iteration must calculate profit margins by taking into account the way in which feed intake and saleable outputs vary with ME concentration. In the case of broilers, adjustment of critical amino acid contents in direct proportion to ME concentration does not result in birds of equal fatness. To avoid an increase in fat deposition at higher energy levels, it is proposed that amino acid specifications should be adjusted in proportion to changes in the net energy supplied by the feed. A model is available which will both interpret responses to amino acids in laying trials and give economically optimal estimates of amino acid inputs for practical feed formulation. Flocks coming into lay and flocks nearing the end of the pullet year have bimodal distributions of rates of lay, with the result that calculations of requirement based on mean output will underestimate the optimal amino acid input for the flock. Chick diets containing surplus protein can lead to impaired utilisation of the first-limiting amino acid. This difficulty can be avoided by stating amino acid requirements as a proportion of the protein.
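At a fixed ME concentration, least-cost formulation is a linear programme; the iteration the abstract calls for wraps such a solve in a loop over candidate ME levels and compares profit margins. A toy two-ingredient sketch of the inner solve (all costs, compositions and targets below are made up):

```python
from scipy.optimize import linprog

# Hypothetical ingredients: maize and soybean meal.
cost = [0.20, 0.35]        # cost per kg
me = [13.5, 9.8]           # MJ ME per kg
lysine = [2.5, 28.0]       # g lysine per kg

target_me = 12.5           # MJ ME per kg of finished feed (one candidate level)
min_lysine = 8.0           # g lysine per kg of finished feed

# Minimise cost subject to: proportions sum to 1, ME hits the target,
# and lysine meets its minimum (expressed as -lysine·x <= -min_lysine).
res = linprog(
    c=cost,
    A_eq=[[1.0, 1.0], me],
    b_eq=[1.0, target_me],
    A_ub=[[-lysine[0], -lysine[1]]],
    b_ub=[-min_lysine],
    bounds=[(0.0, 1.0), (0.0, 1.0)],
)
```

In the scheme described above, amino acid minima such as `min_lysine` would themselves be rescaled with the (net) energy level between iterations, rather than held fixed.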
Abstract:
The member countries of the World Health Organization have endorsed its Global Strategy on Diet, Physical Activity, and Health. We assess the potential consumption impacts of these norms in the United States, France, and the United Kingdom using a mathematical programming approach. We find that adherence would involve large reductions in the consumption of fats and oils accompanying large rises in the consumption of fruits, vegetables, and cereals. Further, in the United Kingdom and the United States, but not France, sugar intakes would have to shrink considerably. Focusing on sub-populations within each country, we find that the least educated, not necessarily the poorest, would have to bear the highest burden of adjustment.
Abstract:
Promotion of adherence to healthy-eating norms has become an important element of nutrition policy in the United States and other developed countries. We assess the potential consumption impacts of adherence to a set of recommended dietary norms in the United States using a mathematical programming approach. We find that adherence to recommended dietary norms would involve significant changes in diets, with large reductions in the consumption of fats and oils along with large increases in the consumption of fruits, vegetables, and cereals. Compliance with norms recommended by the World Health Organization for energy derived from sugar would involve sharp reductions in sugar intakes. We also analyze how dietary adjustments required vary across demographic groups. Most socio-demographic characteristics appear to have relatively little influence on the pattern of adjustment required to comply with norms. Income levels have little effect on required dietary adjustments. Education is the only characteristic to have a significant influence on the magnitude of adjustments required. The least educated rather than the poorest have to bear the highest burden of adjustment. Our analysis suggests that fiscal measures like nutrient-based taxes may not be as regressive as commonly believed. Dissemination of healthy-eating norms to the less educated will be a key challenge for nutrition policy.
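Mathematical programming approaches of this kind are often set up as "find the smallest adjustment to the observed diet that satisfies the norms". With absolute deviations split into increase/decrease slack variables, that is a linear programme. A toy two-food sketch (all quantities and norm levels below are made up, and the real studies use many more foods and constraints):

```python
from scipy.optimize import linprog

# Current diet (g/day): fats & oils, fruit & vegetables.
x0 = [60.0, 200.0]
# Hypothetical norms: fats <= 40 g/day, fruit & veg >= 400 g/day.

# Variables: [u_fat, u_fv, v_fat, v_fv], where new intake = x0 + u - v,
# u = increase, v = decrease, all >= 0. Minimise total adjustment.
c = [1.0, 1.0, 1.0, 1.0]
A_ub = [
    [1.0, 0.0, -1.0, 0.0],   # fats:  x0_fat + u - v <= 40
    [0.0, -1.0, 0.0, 1.0],   # f&v:  -(x0_fv + u - v) <= -400
]
b_ub = [40.0 - x0[0], -(400.0 - x0[1])]

res = linprog(c=c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, None)] * 4)
fats = x0[0] + res.x[0] - res.x[2]
fruit_veg = x0[1] + res.x[1] - res.x[3]
```

Because the objective is the distance from the current diet, solving it separately for each demographic group's observed diet yields exactly the group-specific "burden of adjustment" the abstract compares.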