973 results for polynomial yield function


Relevance: 30.00%

Abstract:

A generalized Drucker–Prager (GD–P) viscoplastic yield surface model was developed and validated for asphalt concrete. The GD–P model was formulated on fabric-tensor-modified stresses to account for the material's inherent anisotropy. A smooth and convex octahedral yield surface function was developed in the GD–P model to characterize the full range of internal friction angles from 0° to 90°. In contrast, the existing Extended Drucker–Prager (ED–P) model was shown to be applicable only to materials with an internal friction angle below 22°. Laboratory tests were performed to evaluate the anisotropic effect and to validate the GD–P model. Results indicated that (1) the yield stresses of an isotropic yield surface model are greater in compression and smaller in extension than those of an anisotropic model, which can result in an under-prediction of the viscoplastic deformation; and (2) the yield stresses predicted by the GD–P model matched the experimental results of the octahedral shear strength tests well at different normal and confining stresses. By contrast, the ED–P model over-predicted the octahedral yield stresses, which can lead to an under-prediction of the permanent deformation. In summary, the rutting depth of an asphalt pavement would be underestimated without considering the anisotropy and convexity of the yield surface for asphalt concrete. The proposed GD–P model was demonstrated to be capable of overcoming these limitations of the existing yield surface models.
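The classical Drucker–Prager surface underlying both the ED–P and GD–P families can be written in terms of the stress invariants I1 and J2. The sketch below shows only this basic form (with a tension-positive sign convention), not the paper's fabric-tensor-modified GD–P formulation:

```python
import numpy as np

def invariants(sigma):
    """First stress invariant I1 and second deviatoric invariant J2 of a 3x3 tensor."""
    I1 = np.trace(sigma)
    s = sigma - I1 / 3.0 * np.eye(3)   # deviatoric part
    J2 = 0.5 * np.tensordot(s, s)      # (1/2) s_ij s_ij
    return I1, J2

def drucker_prager(sigma, alpha, k):
    """Classical Drucker-Prager yield function; f < 0 means the state is elastic.
    alpha and k are material constants tied to friction angle and cohesion."""
    I1, J2 = invariants(sigma)
    return np.sqrt(J2) + alpha * I1 - k
```

The GD–P model of the abstract replaces the circular octahedral cross-section implied by this form with a smooth, convex function that remains valid for friction angles up to 90°.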

Relevance: 30.00%

Abstract:

An iterative Monte Carlo algorithm for evaluating linear functionals of the solution of integral equations with polynomial non-linearity is proposed and studied. The method uses a simulation of branching stochastic processes. It is proved that the mathematical expectation of the introduced random variable is equal to a linear functional of the solution. The algorithm uses the so-called almost optimal density function. Numerical examples are considered. A parallel implementation of the algorithm was also realized using the ATHAPASCAN package as the parallel environment. The computational results demonstrate the high parallel efficiency of the presented algorithm and show that it gives a good solution when the almost optimal density function is used as a transition density.
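To illustrate the branching-process idea on a toy problem, the sketch below estimates the fixed point of a scalar equation with quadratic (polynomial) non-linearity. The stopping probability here is an arbitrary valid choice, not the paper's almost optimal density, and the scalar equation is a stand-in for a genuine integral equation:

```python
import math
import random

# Toy scalar fixed-point equation with polynomial non-linearity:
#     x = a + b * x**2
# Branching estimator: with probability P_STOP score a/P_STOP (leaf node),
# otherwise branch into two independent copies and multiply their scores.
# The estimator is unbiased: E[X] = a + b * E[X]^2 at the fixed point.
A, B = 0.5, 0.3
P_STOP = 0.6  # subcritical branching (mean offspring 0.8), so trees are finite

def branch_sample():
    if random.random() < P_STOP:
        return A / P_STOP
    return (B / (1.0 - P_STOP)) * branch_sample() * branch_sample()

def estimate(n, seed=1):
    random.seed(seed)
    return sum(branch_sample() for _ in range(n)) / n

# Smaller root of b*x^2 - x + a = 0, the fixed point being estimated.
exact = (1.0 - math.sqrt(1.0 - 4.0 * A * B)) / (2.0 * B)
```

The method in the abstract chooses the transition density ("almost optimal density function") to reduce the variance of exactly this kind of branching estimator.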

Relevance: 30.00%

Abstract:

We introduce a modification of the familiar cut function by replacing the linear part in its definition by a polynomial of degree p + 1, thus obtaining a sigmoid function called the generalized cut function of degree p + 1 (GCFP). We then study the uniform approximation of the GCFP by smooth sigmoid functions such as the logistic and the shifted logistic functions. The limiting case of the interval-valued Heaviside step function is also discussed, which imposes the use of the Hausdorff metric. Numerical examples are presented using the CAS Mathematica.
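A minimal sketch of the idea: the ordinary cut function, a cubic variant (degree p + 1 with p = 2) built from a smoothstep polynomial as one illustrative choice (not necessarily the paper's exact GCFP), and a grid estimate of the uniform distance to a logistic sigmoid:

```python
import numpy as np

def cut(t, h):
    """Ordinary cut function: 0 below -h, 1 above h, linear in between."""
    return np.clip((t + h) / (2.0 * h), 0.0, 1.0)

def cubic_cut(t, h):
    """Illustrative degree-3 'generalized cut': the linear ramp is replaced by a
    cubic that also matches slope 0 at +/-h (a smoothstep polynomial; one
    possible choice, not the paper's exact GCFP definition)."""
    u = np.clip((t + h) / (2.0 * h), 0.0, 1.0)
    return u * u * (3.0 - 2.0 * u)

def logistic(t, k):
    return 1.0 / (1.0 + np.exp(-k * t))

# Uniform (sup-norm) distance on a dense grid, a numerical stand-in for the
# paper's uniform/Hausdorff approximation estimates.
t = np.linspace(-5.0, 5.0, 2001)
dist = np.max(np.abs(cubic_cut(t, 1.0) - logistic(t, 3.0)))
```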

Relevance: 30.00%

Abstract:

The aim of this article is to draw attention to calculations of the environmental effects of agriculture and to the definition of marginal agricultural yield. When calculating the environmental impacts of agricultural activities, the real environmental load generated by agriculture is not revealed properly through ecological footprint indicators, as the type of agricultural farming (and thus the nature of the pollution it creates) is not incorporated in the calculation. It is commonly known that extensive farming uses relatively small amounts of labor and capital. It produces a lower yield per unit of land and thus requires more land than intensive farming practices to produce similar yields, so it has a larger crop and grazing footprint. Intensive farms, however, apply fertilizers, insecticides, herbicides, etc. to achieve higher yields, and cultivation and harvesting are often mechanized. In this study, the focus is on highlighting the differences in the environmental impacts of extensive and intensive farming practices through a statistical analysis of the factors determining agricultural yield. A marginal function is constructed for the relation between chemical fertilizer use and yield per unit of fertilizer input. Furthermore, a proposal is presented for how the calculation of the yield factor could be improved. The yield factor used in the calculation of biocapacity is not the marginal yield for a given area but is calculated from the real and actual yields, so that biocapacity and the ecological footprint for cropland are equivalent. Calculations of cropland biocapacity therefore do not show the area needed for sustainable production, but rather the actual land area used for agricultural production. The authors' proposal modifies the yield factor, and the resulting change in biocapacity is calculated.
The results of the statistical analyses reveal the need to clarify the methodology for calculating marginal yield, which could clearly contribute to assessing the real environmental impacts of agriculture.
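The marginal analysis described above can be sketched with hypothetical numbers: fit a quadratic fertilizer-response curve and read the marginal yield off its derivative. All values below are illustrative, not the study's data:

```python
import numpy as np

# Hypothetical fertilizer input (kg/ha) vs. crop yield (t/ha): illustrative
# numbers only, chosen to show diminishing returns.
fert   = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
yield_ = np.array([2.0, 3.1, 3.9, 4.4, 4.6, 4.7])

# Quadratic response y = c2*f^2 + c1*f + c0, fitted by least squares.
c2, c1, c0 = np.polyfit(fert, yield_, 2)

def marginal_yield(f):
    """dY/df: extra yield obtained per extra unit of fertilizer at input level f."""
    return 2.0 * c2 * f + c1
```

With diminishing returns the fitted curve is concave (c2 < 0), so the marginal yield falls as fertilizer input rises, which is the quantity the article argues should enter the yield-factor calculation.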


Relevance: 30.00%

Abstract:

Prices of U.S. Treasury securities vary over time and across maturities. When the market in Treasurys is sufficiently complete and frictionless, these prices may be modeled by a function of time and maturity. A cross-section of this function for time held fixed is called the yield curve; the aggregate of these sections is the evolution of the yield curve. This dissertation studies aspects of this evolution.

There are two complementary approaches to the study of yield curve evolution here: principal components analysis and wavelet analysis. In both approaches the time and maturity variables are discretized. In principal components analysis the vectors of yield curve shifts are viewed as observations of a multivariate normal distribution. The resulting covariance matrix is diagonalized; the resulting eigenvalues and eigenvectors (the principal components) are used to draw inferences about the yield curve evolution.

In wavelet analysis, the vectors of shifts are resolved into hierarchies of localized fundamental shifts (wavelets) that leave specified global properties invariant (average change and duration change). The hierarchies relate to the degree of localization, with movements restricted to a single maturity at the base and general movements at the apex. Second-generation wavelet techniques allow better adaptation of the model to economic observables. Statistically, the wavelet approach is inherently nonparametric, while the wavelets themselves are better adapted to describing a complete market.

Principal components analysis provides information on the dimension of the yield curve process. While there is no clear demarcation between operative factors and noise, the top six principal components pick up 99% of total interest rate variation 95% of the time. An economically justified basis for this process is hard to find; for example, a simple linear model will not suffice for the first principal component, and the shape of this component is nonstationary.

Wavelet analysis works more directly with yield curve observations than principal components analysis. In fact the complete process from bond data to multiresolution is presented, including the dedicated Perl programs and the details of the portfolio metrics and specially adapted wavelet construction. The result is more robust statistics, which provide balance to the more fragile principal components analysis.
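The principal components step can be sketched on synthetic data: simulate yield-curve shifts driven by the classic level/slope/curvature factors plus noise, diagonalize the covariance matrix, and check the variance captured by the top components. The factor shapes and scales below are illustrative assumptions, not the dissertation's Treasury data:

```python
import numpy as np

rng = np.random.default_rng(0)
maturities = np.linspace(0.25, 30.0, 40)   # discretized maturity grid, in years

# Synthetic daily curve shifts built from three classic factors plus noise
# (an illustrative stand-in for observed Treasury yield changes).
level     = np.ones_like(maturities)
slope     = np.exp(-maturities / 10.0)
curvature = (maturities / 10.0) * np.exp(-maturities / 10.0)
factors   = np.vstack([level, slope, curvature])          # 3 x 40
scores    = rng.normal(size=(1000, 3)) * [5.0, 2.0, 1.0]  # factor scores (bp)
shifts    = scores @ factors + rng.normal(scale=0.2, size=(1000, 40))

# Principal components: eigendecomposition of the covariance of the shifts.
cov = np.cov(shifts, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]          # descending eigenvalues
explained = eigvals.cumsum() / eigvals.sum()     # cumulative variance share
```

Because the synthetic shifts live in a three-factor subspace, the top three eigenvalues absorb nearly all the variance, mirroring the dissertation's finding that a handful of components dominate interest rate variation.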

Relevance: 30.00%

Abstract:

Polynomial phase modulated (PPM) signals have been shown to provide improved error rate performance with respect to conventional modulation formats under additive white Gaussian noise and fading channels in single-input single-output (SISO) communication systems. In this dissertation, systems with two and four transmit antennas using PPM signals are presented. In both cases we employed full-rate space-time block codes in order to take advantage of the multipath channel. For two transmit antennas, we used the orthogonal space-time block code (OSTBC) proposed by Alamouti and performed symbol-wise decoding by estimating the phase coefficients of the PPM signal using three different methods: maximum likelihood (ML), sub-optimal ML (S-ML), and the high-order ambiguity function (HAF). In the case of four transmit antennas, we used the full-rate quasi-OSTBC (QOSTBC) proposed by Jafarkhani. However, in order to ensure the best error rate performance, PPM signals were selected so as to maximize the QOSTBC's minimum coding gain distance (CGD). Since this method does not always provide a unique solution, an additional criterion known as the maximum channel interference coefficient (CIC) was proposed. Through Monte Carlo simulations it was shown that using QOSTBCs along with PPM constellations properly selected on the CGD and CIC criteria ensures full diversity in flat fading channels and thus a low bit error rate (BER) at high signal-to-noise ratio (SNR). Lastly, the performance of symbol-wise decoding for QOSTBCs was evaluated. In this case a quasi-zero-forcing method was used to decouple the received signal, and it was shown that although this technique reduces the decoding complexity of the system, a penalty is paid in terms of error rate performance at high SNR.
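For the two-antenna case, Alamouti's OSTBC and its symbol-wise linear decoding can be sketched as follows. Generic unit-modulus complex symbols stand in for the PPM constellation points, and the channel values in the test are illustrative:

```python
import numpy as np

def alamouti_encode(s1, s2):
    """2x2 Alamouti block: rows are the two time slots, columns the two antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining that decouples the two symbols (symbol-wise decoding).
    r1, r2 are the signals received in the two slots; h1, h2 the flat-fading
    channel gains, assumed known and constant over the block."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1_hat, s2_hat
```

The orthogonality of the code is what makes the per-symbol (rather than joint) phase-coefficient estimation in the abstract possible; the four-antenna QOSTBC only partially decouples, hence the quasi-zero-forcing step.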

Relevance: 30.00%

Abstract:

Traditional optics has provided ways to compensate some common visual limitations (up to second-order visual impairments) through spectacles or contact lenses. Recent developments in wavefront science make it possible to obtain an accurate model of the point spread function (PSF) of the human eye. Through what is known as the wavefront aberration function of the human eye, exact knowledge of the optical aberration of the human eye is possible, allowing a mathematical model of the PSF to be obtained. This model could be used to pre-compensate (inverse-filter) the images displayed on computer screens in order to counter the distortion in the user's eye. This project takes advantage of the fact that the wavefront aberration function, commonly expressed as a Zernike polynomial, can be generated from the ophthalmic prescription used to fit spectacles to a person. This allows the pre-compensation, or on-screen deblurring, to be done for various visual impairments up to second order (commonly known as myopia, hyperopia, or astigmatism). The proposed technique and the results obtained with a lens of known PSF, introduced into the visual path of subjects without visual impairment, are presented. In addition to substituting for spectacles or contact lenses in correcting the low-order visual limitations of the viewer, the significance of this approach is its potential to address higher-order abnormalities in the eye, currently not correctable by simple means.
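The pre-compensation idea can be sketched with a stand-in PSF: inverse-filter the image in the frequency domain so that subsequent blurring by the same PSF approximately restores the original. The Gaussian PSF and the regularization constant are illustrative assumptions; the project derives the actual PSF from the Zernike-based wavefront aberration instead:

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Stand-in PSF on an n x n grid (the real application builds the PSF from
    the eye's wavefront aberration function)."""
    ax = np.arange(n) - n // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def blur(image, psf):
    """Circular convolution of the image with the PSF via the FFT."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

def precompensate(image, psf, eps=1e-3):
    """Regularized frequency-domain inverse filter: the displayed image is
    distorted so that the eye's blur undoes the distortion."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    F = np.fft.fft2(image)
    return np.real(np.fft.ifft2(F * np.conj(H) / (np.abs(H) ** 2 + eps)))

# Demo: pre-compensate a smooth test image, then apply the blur.
n = 64
img = np.outer(np.sin(2 * np.pi * np.arange(n) / n),
               np.cos(2 * np.pi * np.arange(n) / n))
psf = gaussian_psf(n, 2.0)
seen = blur(precompensate(img, psf), psf)   # what the "eye" perceives
```

In practice the display's limited dynamic range clips the pre-compensated image, which is one of the main complications the full technique has to handle.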

Relevance: 30.00%

Abstract:

Bayesian adaptive methods have been extensively used in psychophysics to estimate the point at which performance on a task attains arbitrary percentage levels, although the statistical properties of these estimators have never been assessed. We used simulation techniques to determine the small-sample properties of Bayesian estimators of arbitrary performance points, specifically addressing the issues of bias and precision as a function of the target percentage level. The study covered three major types of psychophysical task (yes-no detection, 2AFC discrimination, and 2AFC detection) and explored the entire range of target performance levels allowed for by each task. Other factors included in the study were the form and parameters of the actual psychometric function Psi, the form and parameters of the model function M assumed in the Bayesian method, and the location of Psi within the parameter space. Our results indicate that Bayesian adaptive methods render unbiased estimators of any arbitrary point on Psi only when M = Psi; otherwise they yield a bias whose magnitude can be considerable as the target level moves away from the midpoint of the range of Psi. The standard error of the estimator also increases as the target level approaches extreme values, whether or not M = Psi. Contrary to widespread belief, neither the performance level at which bias is null nor that at which the standard error is minimal can be predicted by the sweat factor. A closed-form expression nevertheless gives a reasonable fit to data describing the dependence of the standard error on the number of trials and the target level, which allows determination of the number of trials that must be administered to obtain estimates with prescribed precision.
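A minimal sketch of the kind of Bayesian adaptive procedure studied: a grid posterior over the threshold of a yes-no logistic psychometric function, with each trial placed at the posterior mean and the model function M equal to the true Psi (the unbiased case of the abstract). All parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

grid = np.linspace(-3.0, 3.0, 301)   # candidate threshold values
slope_true, thr_true = 2.0, 0.7      # simulated observer's Psi

def psi(x, thr, slope=2.0):
    """Yes-no logistic psychometric function (no lapses or guesses)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - thr)))

posterior = np.ones_like(grid) / grid.size   # flat prior over threshold
for _ in range(300):
    x = np.sum(grid * posterior)                        # trial at posterior mean
    resp = rng.random() < psi(x, thr_true, slope_true)  # simulated response
    like = psi(x, grid) if resp else 1.0 - psi(x, grid)  # here M = Psi
    posterior *= like
    posterior /= posterior.sum()

thr_hat = np.sum(grid * posterior)   # Bayesian estimate of the threshold
```

The paper's point is precisely that when M differs from Psi, or when the target performance level sits away from the midpoint, this estimator acquires bias and a larger standard error.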

Relevance: 30.00%

Abstract:

An accurate knowledge of the fluorescence yield and its dependence on atmospheric properties such as pressure, temperature or humidity is essential to obtain a reliable measurement of the primary energy of cosmic rays in experiments using the fluorescence technique. In this work, several sets of fluorescence yield data (i.e. absolute value and quenching parameters) are described and compared. A simple procedure to study the effect of the assumed fluorescence yield on the reconstructed shower parameters (energy and shower maximum depth) as a function of the primary features has been developed. As an application, the effect of water vapor and temperature dependence of the collisional cross section on the fluorescence yield and its impact on the reconstruction of primary energy and shower maximum depth has been studied. Published by Elsevier B.V.
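The pressure dependence induced by collisional quenching is commonly parameterized as Y(P) = Y0 / (1 + P/P'), where P' is the quenching reference pressure; temperature and humidity effects enter through the collisional cross sections that determine P'. A sketch with placeholder values (the paper compares actual published data sets for Y0 and the quenching parameters, which are not reproduced here):

```python
def fluorescence_yield(P_hPa, Y0=5.0, P_prime=15.0):
    """Collisional-quenching parameterization Y(P) = Y0 / (1 + P/P').
    Y0 (yield in the zero-pressure limit) and P' (hPa) are illustrative
    placeholder values, not a recommended fluorescence data set."""
    return Y0 / (1.0 + P_hPa / P_prime)

def relative_change(P_hPa, P_prime_a, P_prime_b):
    """Relative shift in yield when P' changes, e.g. through a humidity- or
    temperature-dependent collisional cross section."""
    ya = fluorescence_yield(P_hPa, P_prime=P_prime_a)
    yb = fluorescence_yield(P_hPa, P_prime=P_prime_b)
    return yb / ya - 1.0
```

A systematic shift of this kind propagates directly into the reconstructed primary energy, which is the effect the paper's procedure quantifies.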

Relevance: 30.00%

Abstract:

Constraint programming is a powerful technique for solving, among other things, large-scale scheduling problems. Scheduling aims to allocate tasks to resources over time. While it executes, a task consumes a resource at a constant rate. Generally, one seeks to optimize an objective function such as the total duration of a schedule. Solving a scheduling problem means deciding when each task starts and which resource executes it. Most scheduling problems are NP-hard; consequently, no known algorithm can solve them in polynomial time. However, there exist specializations of scheduling problems that are not NP-complete and can be solved in polynomial time using dedicated algorithms. Our objective is to explore these scheduling algorithms in several varied contexts. Filtering techniques have evolved considerably in recent years in constraint-based scheduling. The prominence of filtering algorithms rests on their ability to reduce the search tree by excluding domain values that do not participate in any solution of the problem. We propose improvements and present more efficient filtering algorithms for solving classical scheduling problems. In addition, we present adaptations of filtering techniques to the case where tasks can be delayed. We also consider various properties of industrial problems and solve more efficiently problems where the optimization criterion is not necessarily the completion time of the last task. For example, we present polynomial-time algorithms for the case where the amount of available resource fluctuates over time, or where the cost of executing a task at time t depends on t.
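One classic polynomial-time building block of the filtering algorithms described above is the time-tabling profile of compulsory parts on a cumulative resource. The sketch below is a simplified illustration of that idea, not one of the dissertation's own algorithms:

```python
def compulsory_part(est, lst, duration):
    """A task with earliest start est and latest start lst must be running
    throughout [lst, est + duration) whenever that interval is non-empty."""
    start, end = lst, est + duration
    return (start, end) if start < end else None

def resource_profile(tasks, horizon):
    """Aggregate demand of all compulsory parts over the time points
    0 .. horizon-1. tasks is a list of (est, lst, duration, demand) tuples.
    Filtering algorithms compare this profile against the resource capacity
    to prune start times from the tasks' domains."""
    profile = [0] * horizon
    for est, lst, dur, demand in tasks:
        cp = compulsory_part(est, lst, dur)
        if cp:
            for t in range(cp[0], cp[1]):
                profile[t] += demand
    return profile
```

If the profile exceeds the capacity at some time point, no schedule consistent with the current domains exists; if adding a candidate start time of some task would exceed it, that value can be filtered out.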

Relevance: 30.00%

Abstract:

Many rainfed wheat production systems rely on stored soil water for some or all of their water inputs. Selection and breeding for root traits could deliver a yield benefit; however, breeding for root traits has traditionally been avoided because of the difficulty of phenotyping mature root systems, limited understanding of root system development and function, and the strong influence of environmental conditions on the phenotype of the mature root system. This paper outlines an international field selection program for beneficial root traits at maturity using soil coring in India and Australia. In the rainfed areas of India, wheat is sown at the end of the monsoon into hot soils with a quickly receding soil water profile, and in-season water inputs are minimal. We hypothesised that wheat selected and bred for high yield under these conditions would have deep, vigorous root systems, allowing it to access and utilise the stored soil water at depth around anthesis and grain filling, when surface layers were dry. The Indian trials resulted in 49 lines being sent to Australia for phenotyping, where they were ranked against 41 high-yielding Australian lines. Variation was observed for deep root traits; for example, in eastern Australia in 2012, maximum rooting depth ranged from 118.8 to 146.3 cm. There was significant variation in root traits between sites and years; however, several Indian genotypes were identified that consistently ranked highly across sites and years for deep rooting traits.

Relevance: 20.00%

Abstract:

The aim was to evaluate the relationship between orofacial function, dentofacial morphology, and bite force in young subjects. Three hundred and sixteen subjects were divided according to dentition stage (early, intermediate, and late mixed, and permanent dentition). Orofacial function was screened using the Nordic Orofacial Test-Screening (NOT-S). Orthodontic treatment need, bite force, lateral and frontal craniofacial dimensions, and the presence of sleep bruxism were also assessed. The results were subjected to descriptive statistics, normality and correlation tests, analysis of variance, and multiple linear regression to test the relationship between NOT-S scores and the studied independent variables. The variance of NOT-S scores between groups was not significant. Evaluation of the variables that contributed significantly to the variation in NOT-S scores showed that age and the presence of bruxism were related to higher NOT-S total scores, while a larger overbite measurement and the presence of a closed lip posture were related to lower scores. Bite force did not show a significant relationship with orofacial dysfunction scores, and no significant correlations between craniofacial dimensions and NOT-S scores were observed. In summary, age and sleep bruxism were related to higher NOT-S scores, while a larger overbite and closed lip posture contributed to lower scores of orofacial dysfunction.

Relevance: 20.00%

Abstract:

In this study, we investigated the effect of low-density lipoprotein receptor (LDLr) deficiency on the gap junctional connexin 36 (Cx36) islet content and on the functional and growth response of pancreatic beta-cells in C57BL/6 mice fed a high-fat (HF) diet. After 60 days on a regular or HF diet, the metabolic state and morphometric islet parameters of wild-type (WT) and LDLr-/- mice were assessed. HF diet-fed WT animals became obese and hypercholesterolaemic, as well as hyperglycaemic, hyperinsulinaemic, glucose intolerant and insulin resistant, characterizing them as prediabetic. They also showed a significant decrease in beta-cell secretory response to glucose. Overall, LDLr-/- mice displayed greater susceptibility to the HF diet, as judged by their marked cholesterolaemia, intolerance to glucose and pronounced decrease in glucose-stimulated insulin secretion. The HF diet induced a similar, significant decrease in beta-cell Cx36 content in both WT and LDLr-/- mice, as revealed by immunoblotting. Prediabetic WT mice displayed a marked increase in beta-cell mass, mainly due to beta-cell hypertrophy/replication. Nevertheless, HF diet-fed LDLr-/- mice showed no significant changes in beta-cell mass, but lower islet-duct association (neogenesis) and a higher beta-cell apoptosis index were seen as compared to controls. The higher metabolic susceptibility to the HF diet of LDLr-/- mice may be explained by a deficiency in the insulin secretory response to glucose associated with a lack of compensatory beta-cell expansion.

Relevance: 20.00%

Abstract:

This study aimed to evaluate long-term atrophy in contralateral hippocampal volume after surgery for unilateral MTLE, as well as the cognitive outcome of patients submitted to either selective transsylvian amygdalohippocampectomy (SelAH) or anterior temporal lobe resection (ATL). We performed a longitudinal study of 47 patients with MRI signs of unilateral hippocampal sclerosis (23 patients with right-sided hippocampal sclerosis) who underwent surgical treatment for MTLE. They underwent preoperative/postoperative high-resolution MRI as well as neuropsychological assessment of memory and estimated IQ. To investigate possible changes in the contralateral hippocampus of patients, we included 28 controls who underwent two MRIs at long-term intervals. Volumetry using preoperative MRI showed significant hippocampal atrophy ipsilateral to the side of surgery when compared with controls (p<0.0001) but no differences in contralateral hippocampal volumes. The mean postoperative follow-up was 8.7 years (± 2.5 SD; median = 8.0). Our patients were classified as Engel I (80%), Engel II (18.2%), and Engel III (1.8%). We observed a small but significant reduction in the contralateral hippocampus of patients but no volume changes in controls. Most patients presented small declines in both estimated IQ and memory, which were more pronounced in patients with left-sided resection and in those with persistent postoperative seizures; the different surgical approaches did not impose differences in seizure control or in cognitive outcome. We also demonstrated that manual volumetry can reveal a reduction in volume in the contralateral hippocampus, although this change was mild and could not be detected by visual analysis.
These new findings suggest that dynamic processes continue to act after the removal of the hippocampus, and further studies with larger groups may help in understanding the underlying mechanisms.