963 results for "polynomial yield function"


Relevance: 30.00%

Publisher:

Abstract:

The aim of this article is to draw attention to calculations of the environmental effects of agriculture and to the definition of marginal agricultural yield. When the environmental impacts of agricultural activities are calculated, the real environmental load generated by agriculture is not properly revealed by ecological footprint indicators, because the type of farming (and thus the nature of the pollution it creates) is not incorporated in the calculation. Extensive farming uses relatively small amounts of labor and capital; it produces a lower yield per unit of land and thus requires more land than intensive farming to produce similar output, so it has a larger crop and grazing footprint. Intensive farms, by contrast, achieve higher yields by applying fertilizers, insecticides, herbicides, etc., and cultivation and harvesting are often mechanized. This study highlights the differences in the environmental impacts of extensive and intensive farming practices through a statistical analysis of the factors determining agricultural yield. A marginal function is constructed for the relation between chemical fertilizer use and the yield obtained per unit of fertilizer input, and a proposal is presented for how the calculation of the yield factor could be improved. The yield factor used in calculating biocapacity is not the marginal yield for a given area but is derived from actual yields, so that biocapacity and the ecological footprint for cropland come out equivalent: cropland biocapacity calculations do not show the area needed for sustainable production, but rather the land area actually used for agricultural production. The authors therefore propose a modification of the yield factor and calculate the resulting change in biocapacity. The statistical analyses reveal the need to clarify the methodology for calculating marginal yield, which would contribute to assessing the real environmental impacts of agriculture.
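
A minimal sketch of the marginal-yield idea described above, assuming a hypothetical quadratic (diminishing-returns) response of yield to fertilizer input; the data values, fitted form, and function names are illustrative only and are not taken from the article.

```python
import numpy as np

# Hypothetical observations: fertilizer input (kg/ha) and crop yield (t/ha).
fertilizer = np.array([0, 50, 100, 150, 200, 250], dtype=float)
yield_t_ha = np.array([2.1, 3.0, 3.6, 4.0, 4.1, 4.0], dtype=float)

# Fit a quadratic yield response y(f) = a*f^2 + b*f + c (diminishing returns).
a, b, c = np.polyfit(fertilizer, yield_t_ha, deg=2)

def marginal_yield(f):
    """Derivative dy/df: extra yield obtained per extra unit of fertilizer."""
    return 2 * a * f + b

# Marginal yield falls as fertilizer input rises; where it approaches zero,
# additional fertilizer adds pollution load without adding produce.
for f in (50, 150, 250):
    print(f"at {f} kg/ha: marginal yield = {marginal_yield(f):.4f} t per kg")
```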

Relevance: 30.00%

Publisher:

Abstract:

Prices of U.S. Treasury securities vary over time and across maturities. When the market in Treasurys is sufficiently complete and frictionless, these prices may be modeled as a function of time and maturity. A cross-section of this function with time held fixed is called the yield curve; the aggregate of these cross-sections is the evolution of the yield curve. This dissertation studies aspects of this evolution. There are two complementary approaches to the study of yield curve evolution here: the first is principal components analysis; the second is wavelet analysis. In both approaches the time and maturity variables are discretized. In principal components analysis the vectors of yield curve shifts are viewed as observations of a multivariate normal distribution. The resulting covariance matrix is diagonalized, and the resulting eigenvalues and eigenvectors (the principal components) are used to draw inferences about the yield curve evolution. In wavelet analysis, the vectors of shifts are resolved into hierarchies of localized fundamental shifts (wavelets) that leave specified global properties invariant (average change and duration change). The hierarchies relate to the degree of localization, with movements restricted to a single maturity at the base and general movements at the apex. Second-generation wavelet techniques allow better adaptation of the model to economic observables. Statistically, the wavelet approach is inherently nonparametric, while the wavelets themselves are better adapted to describing a complete market. Principal components analysis provides information on the dimension of the yield curve process. While there is no clear demarcation between operative factors and noise, the top six principal components pick up 99% of total interest rate variation 95% of the time. An economically justified basis for this process is hard to find; for example, a simple linear model will not suffice for the first principal component, and the shape of this component is nonstationary. Wavelet analysis works more directly with yield curve observations than principal components analysis. In fact, the complete process from bond data to multiresolution is presented, including the dedicated Perl programs and the details of the portfolio metrics and specially adapted wavelet construction. The result is more robust statistics which provide balance to the more fragile principal components analysis.
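
A minimal sketch of the principal components step described above, assuming a matrix of daily yield-curve shifts (rows = days, columns = maturities); the synthetic data, dimensions, and threshold printed at the end are illustrative only, not the dissertation's results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic yield-curve shifts: 500 days x 10 maturities (basis points).
shifts = rng.normal(size=(500, 10)) @ np.diag(np.linspace(3.0, 0.5, 10))

# Treat each day's shift vector as one observation of a multivariate normal.
cov = np.cov(shifts, rowvar=False)

# Diagonalize the covariance matrix; eigenvectors are the principal components.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # sort by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = np.cumsum(eigvals) / eigvals.sum()
print("variance explained by top 6 components:", explained[5])
```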

Relevance: 30.00%

Publisher:

Abstract:

Polynomial phase modulated (PPM) signals have been shown to provide improved error rate performance with respect to conventional modulation formats under additive white Gaussian noise and fading channels in single-input single-output (SISO) communication systems. In this dissertation, systems with two and four transmit antennas using PPM signals are presented. In both cases full-rate space-time block codes are employed in order to take advantage of the multipath channel. For two transmit antennas, the orthogonal space-time block code (OSTBC) proposed by Alamouti is used, and symbol-wise decoding is performed by estimating the phase coefficients of the PPM signal with three different methods: maximum likelihood (ML), sub-optimal ML (S-ML), and the high-order ambiguity function (HAF). For four transmit antennas, the full-rate quasi-OSTBC (QOSTBC) proposed by Jafarkhani is used. To ensure the best error rate performance, the PPM signals are selected so as to maximize the QOSTBC's minimum coding gain distance (CGD). Since this criterion does not always yield a unique solution, an additional criterion, the maximum channel interference coefficient (CIC), is proposed. Monte Carlo simulations show that using QOSTBCs along with PPM constellations properly selected by the CGD and CIC criteria ensures full diversity in flat fading channels and thus low BER at high signal-to-noise ratios (SNR). Lastly, the performance of symbol-wise decoding for QOSTBCs is evaluated; here a quasi zero-forcing method is used to decouple the received signal, and it is shown that although this technique reduces the decoding complexity of the system, it incurs a penalty in error rate performance at high SNR.
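
A minimal sketch of the two-antenna Alamouti OSTBC encoding and symbol-wise combining that the two-antenna system builds on, assuming a flat fading channel known at the receiver and a generic QPSK constellation standing in for the PPM symbol alphabet; this is an illustration of the standard Alamouti scheme, not the dissertation's ML/S-ML/HAF phase-coefficient receiver.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in constellation (QPSK) for the PPM symbol alphabet.
constellation = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)

def alamouti_encode(s1, s2):
    """Two time slots x two antennas: [[s1, s2], [-conj(s2), conj(s1)]]."""
    return np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]])

# Random symbols, Rayleigh flat-fading gains h1, h2, and receiver noise.
s1, s2 = rng.choice(constellation, size=2)
h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
X = alamouti_encode(s1, s2)
noise = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))
r = X @ h + noise                      # r[0], r[1]: received samples in slots 1, 2

# Symbol-wise combining (decouples s1 and s2), then nearest-symbol decision.
z1 = np.conj(h[0]) * r[0] + h[1] * np.conj(r[1])
z2 = np.conj(h[1]) * r[0] - h[0] * np.conj(r[1])
gain = np.abs(h[0])**2 + np.abs(h[1])**2
s1_hat = constellation[np.argmin(np.abs(z1 - gain * constellation))]
s2_hat = constellation[np.argmin(np.abs(z2 - gain * constellation))]
print(s1_hat == s1, s2_hat == s2)
```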

Relevance: 30.00%

Publisher:

Abstract:

Traditional optics has provided ways to compensate for some common visual limitations (up to second-order visual impairments) through spectacles or contact lenses. Recent developments in wavefront science make it possible to obtain an accurate model of the point spread function (PSF) of the human eye. Through what is known as the wavefront aberration function of the human eye, exact knowledge of the eye's optical aberration is possible, allowing a mathematical model of the PSF to be obtained. This model can be used to pre-compensate (inverse-filter) the images displayed on computer screens in order to counter the distortion in the user's eye. This project takes advantage of the fact that the wavefront aberration function, commonly expressed as a Zernike polynomial, can be generated from the ophthalmic prescription used to fit spectacles to a person. This allows the pre-compensation, or on-screen deblurring, to be done for visual impairments up to second order (commonly known as myopia, hyperopia, or astigmatism). The technique proposed toward that goal is presented, together with results obtained using a lens of known PSF introduced into the visual path of subjects without visual impairment. Beyond substituting for the effect of spectacles or contact lenses in correcting the low-order visual limitations of the viewer, the significance of this approach is that it has the potential to address higher-order aberrations of the eye, currently not correctable by simple means.
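
A minimal sketch of the pre-compensation (inverse-filtering) idea, assuming the PSF is known and using a regularized, Wiener-style inverse in the frequency domain so the division stays stable; the image, PSF, and regularization constant are placeholders, not the project's actual pipeline.

```python
import numpy as np

def precompensate(image, psf, reg=1e-2):
    """Inverse-filter `image` by the eye's PSF so that, after the eye blurs
    the displayed result with that same PSF, the percept approximates `image`.
    A small regularization term keeps the division stable where the PSF's
    spectrum is close to zero (Wiener-style inverse)."""
    H = np.fft.fft2(psf, s=image.shape)            # optical transfer function
    W = np.conj(H) / (np.abs(H) ** 2 + reg)        # regularized inverse filter
    out = np.real(np.fft.ifft2(np.fft.fft2(image) * W))
    return np.clip(out, 0.0, 1.0)                  # displayable intensity range

# Placeholder PSF (defocus-like Gaussian blob) and a simple test image.
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / 8.0)
psf /= psf.sum()
image = np.zeros((64, 64)); image[24:40, 24:40] = 1.0
display = precompensate(image, psf)
```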

Relevance: 30.00%

Publisher:

Abstract:

Bayesian adaptive methods have been extensively used in psychophysics to estimate the point at which performance on a task attains arbitrary percentage levels, although the statistical properties of these estimators have never been assessed. We used simulation techniques to determine the small-sample properties of Bayesian estimators of arbitrary performance points, specifically addressing the issues of bias and precision as a function of the target percentage level. The study covered three major types of psychophysical task (yes-no detection, 2AFC discrimination and 2AFC detection) and explored the entire range of target performance levels allowed by each task. Other factors included in the study were the form and parameters of the actual psychometric function Psi, the form and parameters of the model function M assumed in the Bayesian method, and the location of Psi within the parameter space. Our results indicate that Bayesian adaptive methods render unbiased estimators of any arbitrary point on Psi only when M = Psi; otherwise they yield a bias whose magnitude can be considerable as the target level moves away from the midpoint of the range of Psi. The standard error of the estimator also increases as the target level approaches extreme values, whether or not M = Psi. Contrary to widespread belief, neither the performance level at which bias is null nor that at which standard error is minimal can be predicted by the sweat factor. A closed-form expression nevertheless gives a reasonable fit to data describing the dependence of standard error on number of trials and target level, which allows determination of the number of trials that must be administered to obtain estimates with prescribed precision.
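
As a concrete illustration of "arbitrary performance points", a minimal sketch that inverts one common model psychometric function (a Weibull form for 2AFC detection) to find the stimulus level attaining a target proportion correct; the functional form and parameters are stand-ins, not the specific Psi or M used in the study.

```python
import numpy as np

def psi_2afc(x, alpha=1.0, beta=3.0):
    """Weibull psychometric function for 2AFC detection (guess rate 0.5)."""
    return 0.5 + 0.5 * (1.0 - np.exp(-(x / alpha) ** beta))

def target_point(p_target, alpha=1.0, beta=3.0):
    """Stimulus level at which performance attains the target proportion
    correct, obtained by inverting the Weibull form analytically."""
    q = (p_target - 0.5) / 0.5                 # map [0.5, 1] back to [0, 1]
    return alpha * (-np.log(1.0 - q)) ** (1.0 / beta)

# Points an adaptive procedure might target; per the abstract, estimator bias
# grows as these move away from the midpoint of the function's range.
for p in (0.65, 0.75, 0.90):
    print(p, round(target_point(p), 3))
```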

Relevance: 30.00%

Publisher:

Abstract:

An accurate knowledge of the fluorescence yield and its dependence on atmospheric properties such as pressure, temperature, or humidity is essential to obtain a reliable measurement of the primary energy of cosmic rays in experiments using the fluorescence technique. In this work, several sets of fluorescence yield data (i.e., absolute value and quenching parameters) are described and compared. A simple procedure has been developed to study the effect of the assumed fluorescence yield on the reconstructed shower parameters (energy and shower maximum depth) as a function of the primary's features. As an application, the effect of water vapor and of the temperature dependence of the collisional cross section on the fluorescence yield, and its impact on the reconstruction of primary energy and shower maximum depth, has been studied.
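
A minimal sketch of a commonly used collisional-quenching parameterization of the fluorescence yield, written here under the assumption of a temperature-independent collisional cross section (which gives the sqrt(T) scaling of the quenching reference pressure); the constants are placeholders, not values from any of the data sets compared in the paper, which studies precisely the corrections to this simple form.

```python
import numpy as np

def fluorescence_yield(p_hPa, T_K, Y0=1.0, p_prime_0=15.0, T0=293.0):
    """Simple quenching model of the air-fluorescence yield:
        Y(p, T) = Y0 / (1 + p / p'(T)),   p'(T) = p'_0 * sqrt(T / T0)
    The sqrt(T) scaling assumes a temperature-independent collisional
    cross section; water-vapor quenching is ignored. All constants here
    are illustrative placeholders."""
    p_prime = p_prime_0 * np.sqrt(T_K / T0)
    return Y0 / (1.0 + p_hPa / p_prime)

# Yield near ground level versus high in the atmosphere (illustrative values).
print(fluorescence_yield(1013.0, 288.0))   # ~sea level
print(fluorescence_yield(200.0, 220.0))    # ~12 km altitude
```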

Relevance: 30.00%

Publisher:

Abstract:

Constraint programming is a powerful technique for solving, among others, large-scale scheduling problems. Scheduling aims to allocate tasks to resources over time; while it executes, a task consumes a resource at a constant rate. One generally seeks to optimize an objective function such as the total duration of a schedule. Solving a scheduling problem means deciding when each task starts and which resource executes it. Most scheduling problems are NP-hard; consequently, no known algorithm can solve them in polynomial time. However, there exist specializations of scheduling problems that are not NP-complete and can be solved in polynomial time with dedicated algorithms. Our objective is to explore these scheduling algorithms in several varied contexts. Filtering techniques for constraint-based scheduling have evolved considerably in recent years. The prominence of filtering algorithms rests on their ability to reduce the search tree by excluding domain values that cannot participate in any solution of the problem. We propose improvements and present more efficient filtering algorithms for solving classical scheduling problems. We also present adaptations of filtering techniques to the case where tasks may be delayed. We further consider various properties of industrial problems and solve more efficiently problems whose optimization criterion is not necessarily the completion time of the last task. For example, we present polynomial-time algorithms for the case where the amount of resource fluctuates over time, or where the cost of executing a task at time t depends on t.
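
A minimal sketch of one classic polynomial-time filtering idea from constraint-based scheduling (the time-table overload check for a cumulative resource): compute each task's compulsory part from its earliest start and latest completion time, accumulate the resource profile, and detect when the compulsory demand alone exceeds capacity. This is a generic textbook illustration, not the filtering algorithms contributed by the thesis.

```python
from dataclasses import dataclass

@dataclass
class Task:
    est: int       # earliest start time
    lct: int       # latest completion time
    duration: int
    demand: int    # resource consumed at a constant rate while running

def compulsory_profile(tasks, capacity, horizon):
    """Time-table check for a cumulative resource: every task must run during
    [lct - duration, est + duration) whenever that interval is non-empty (its
    compulsory part). If the summed compulsory demands ever exceed capacity,
    no schedule exists and the search can backtrack immediately."""
    profile = [0] * horizon
    for t in tasks:
        start, end = t.lct - t.duration, t.est + t.duration
        for time in range(max(start, 0), min(end, horizon)):
            profile[time] += t.demand
            if profile[time] > capacity:
                return None        # overload detected: inconsistent
    return profile

tasks = [Task(est=0, lct=5, duration=4, demand=2),
         Task(est=1, lct=6, duration=4, demand=2)]
print(compulsory_profile(tasks, capacity=3, horizon=6))   # None: overload
```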

Relevance: 30.00%

Publisher:

Abstract:

Many rainfed wheat production systems rely on stored soil water for some or all of their water inputs. Selection and breeding for root traits could result in a yield benefit; however, breeding for root traits has traditionally been avoided because of the difficulty of phenotyping mature root systems, limited understanding of root system development and function, and the strong influence of environmental conditions on the phenotype of the mature root system. This paper outlines an international field selection program for beneficial root traits at maturity using soil coring in India and Australia. In the rainfed areas of India, wheat is sown at the end of the monsoon into hot soils with a quickly receding soil water profile, and in-season water inputs are minimal. We hypothesised that wheat selected and bred for high yield under these conditions would have deep, vigorous root systems, allowing access to the stored soil water at depth around anthesis and grain filling, when surface layers are dry. The Indian trials resulted in 49 lines being sent to Australia for phenotyping, where they were ranked against 41 high-yielding Australian lines. Variation was observed for deep root traits; for example, in eastern Australia in 2012, maximum rooting depth ranged from 118.8 to 146.3 cm. There was significant variation in root traits between sites and years; however, several Indian genotypes consistently ranked highly across sites and years for deep rooting traits.

Relevance: 20.00%

Publisher:

Abstract:

Fleck and Johnson (Int. J. Mech. Sci. 29 (1987) 507) and Fleck et al. (Proc. Inst. Mech. Eng. 206 (1992) 119) have developed foil rolling models which allow for large deformations in the roll profile, including the possibility that the rolls flatten completely. However, these models require computationally expensive iterative solution techniques. A new approach to the approximate solution of the Fleck et al. (1992) Influence Function Model has been developed using both analytic and approximation techniques. The numerical difficulties arising from solving an integral equation in the flattened region have been reduced by applying an inverse Hilbert transform to obtain an analytic expression for the pressure. The method described in this paper is applicable whether or not a flat region is present.

Relevance: 20.00%

Publisher:

Abstract:

A new method for estimating the time to colonization of methicillin-resistant Staphylococcus aureus (MRSA) patients is developed in this paper. The time to colonization with MRSA is modelled using a Bayesian smoothing approach for the hazard function. Two prior models are discussed: the first-difference prior and the second-difference prior. The second-difference prior model gives smoother estimates of the hazard function and, when applied to data from an intensive care unit (ICU), clearly shows an increasing hazard up to day 13 and a decreasing hazard thereafter. The results demonstrate that the hazard is not constant and provide a useful quantification of the effect of length of stay on the risk of MRSA colonization, offering useful insight.
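
A minimal sketch of the smoothing idea behind the two priors: penalizing first or second differences of the log-hazard over days of stay, here written as a simple penalized Poisson-likelihood fit rather than the paper's full Bayesian model; the daily counts and penalty weight are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative daily data: patients still at risk and MRSA colonizations per day.
at_risk = np.array([60, 55, 50, 46, 41, 37, 33, 28, 24, 20, 16, 13, 10, 8])
events  = np.array([ 2,  3,  3,  4,  4,  5,  5,  4,  4,  3,  2,  1,  1, 0])
days = len(events)

def neg_penalized_loglik(log_h, order=2, lam=10.0):
    """Poisson approximation to the discrete-time hazard likelihood plus a
    roughness penalty on differences of the log-hazard. order=1 mimics the
    first-difference prior, order=2 the smoother second-difference prior."""
    h = np.exp(log_h)
    loglik = np.sum(events * log_h - at_risk * h)
    rough = np.sum(np.diff(log_h, n=order) ** 2)
    return -(loglik - lam * rough)

fit = minimize(neg_penalized_loglik, x0=np.full(days, np.log(0.05)), method="BFGS")
hazard = np.exp(fit.x)
print(np.round(hazard, 3))     # smoothed daily hazard of colonization
```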