861 results for Stochastic Frontier Production Function
Abstract:
This analysis paper presents previously unknown properties of some special cases of the Wright function whose consideration is necessitated by our work on probability theory and the theory of stochastic processes. Specifically, we establish new asymptotic properties of the particular Wright function 1Ψ1(ρ, k; ρ, 0; x) = Σ_{n=0}^{∞} [Γ(k + ρn)/Γ(ρn)] x^n/n! (|x| < ∞) when the parameter ρ ∈ (−1, 0) ∪ (0, ∞) and the argument x is real. In the probability theory applications, which are focused on studies of the Poisson-Tweedie mixtures, the parameter k is a non-negative integer. Several representations involving well-known special functions are given for certain particular values of ρ. The asymptotics of 1Ψ1(ρ, k; ρ, 0; x) are obtained under numerous assumptions on the behavior of the arguments k and x when the parameter ρ is both positive and negative. We also provide some integral representations and structural properties involving the ‘reduced’ Wright function 0Ψ1(−−; ρ, 0; x) with ρ ∈ (−1, 0) ∪ (0, ∞), which might be useful for the derivation of new properties of members of the power-variance family of distributions. Some of these imply a reflection principle that connects the functions 0Ψ1(−−; ±ρ, 0; ·) and certain Bessel functions. Several asymptotic relationships for both particular cases of this function are also given. A few of these follow under additional constraints from probability theory results which, although previously available, were unknown to analysts.
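As a numerical sanity check, the series above can be summed directly. The sketch below is illustrative and not taken from the paper; it assumes integer k ≥ 1 and ρ > 0, and uses the convention 1/Γ(0) = 0, under which the n = 0 term vanishes and the sum starts at n = 1:

```python
import math

def wright_1psi1(rho, k, x, n_terms=80):
    """Partial sum of 1Ψ1(ρ, k; ρ, 0; x) = Σ_{n>=0} Γ(k + ρn)/Γ(ρn) · x^n / n!.

    Illustrative sketch for integer k >= 1 and ρ > 0: the n = 0 term
    vanishes because 1/Γ(0) = 0, so summation starts at n = 1.
    """
    total = 0.0
    for n in range(1, n_terms + 1):
        total += (math.gamma(k + rho * n) / math.gamma(rho * n)
                  * x**n / math.factorial(n))
    return total
```

For k = 1 the ratio collapses, since Γ(1 + ρn)/Γ(ρn) = ρn, giving 1Ψ1(ρ, 1; ρ, 0; x) = ρx·e^x, which makes a convenient closed-form check of the partial sum.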
Abstract:
A series of 3 experiments were conducted to evaluate the use of microalgae as supplements for ruminants consuming low-CP tropical grasses. In Exp. 1, the chemical composition and in vitro protein degradability of 9 algae species and 4 protein supplements were determined. In Exp. 2, rumen function and microbial protein (MCP) production were determined in Bos indicus steers fed speargrass hay alone or supplemented with Spirulina platensis, Chlorella pyrenoidosa, Dunaliella salina, or cottonseed meal (CSM). In Exp. 3, DMI and ADG were determined in B. indicus steers fed speargrass hay alone or supplemented with increasing amounts of NPN (urea combined with ammonium sulfate), CSM, or S. platensis. In Exp. 1, the CP content of S. platensis and C. pyrenoidosa (675 and 580 g/kg DM) was highest among the algae species and higher than the other protein supplements evaluated, and Schizochytrium sp. had the highest crude lipid (CL) content (198 g/kg DM). In Exp. 2, S. platensis supplementation increased speargrass hay intake, the efficiency of MCP production, the fractional outflow rate of digesta from the rumen, the concentration of NH3N, and the molar proportion of branched-chain fatty acids in the rumen fluid of steers above all other treatments. Dunaliella salina acceptance by steers was low, and this resulted in no significant difference from unsupplemented steers for any parameter measured for this algal supplement. In Exp. 3, ADG linearly increased with increasing supplementary N intake from both S. platensis and NPN, with no difference between the 2 supplements. In contrast, ADG quadratically increased with increasing supplementary N intake from CSM. It was concluded that S. platensis and C. pyrenoidosa may potentially be used as protein sources for cattle grazing low-CP pastures.
Abstract:
This thesis argues that the study of narrative television has been limited by an adherence to accepted and commonplace conceptions of endings as derived from literary theory, particularly a preoccupation with the terminus of the text as the ultimate site of cohesion, structure, and meaning. Such common conceptions of endings, this thesis argues, are largely incompatible with the realities of television’s production and reception, and as a result the study of endings in television needs to be re-thought to pay attention to the specificities of the medium. In this regard, this thesis proposes a model of intra-narrative endings, islands of cohesion, structure, and meaning located within television texts, as a possible solution to the problem of endings in television. These intra-narrative endings maintain the functionality of traditional endings, whilst also allowing for the specificities of television as a narrative medium. The first two chapters set out the theoretical groundwork, first by exploring the essential characteristics of narrative television (serialisation, fragmentation, duration, repetition, and accumulation), then by exploring the unique relationship between narrative television and the forces of contingency. These chapters also introduce the concept of intra-narrative endings as a possible solution to the problems of television’s narrative structure, and the medium’s relationship to contingency. Following on from this, my three case studies examine forms of television which have either been traditionally defined as particularly resistant to closure (soap opera and the US sitcom) or which have received little analysis in terms of their narrative structure (sports coverage). Each of these case studies provides contextual material on these televisual forms, situating them in terms of their narrative structure, before moving on to analyse them in terms of my concept of intra-narrative endings.
In the case of soap opera, the chapter focusses on the death of the long-running character Pat Butcher in the British soap EastEnders (BBC, 1985-), while my chapter on the US sitcom focusses on the varying levels of closure that can be located within the US sitcom, using Friends (NBC, 1994-2004) as a particular example. Finally, my chapter on sports coverage analyses the BBC’s coverage of the 2012 London Olympics, and focusses on the narratives surrounding cyclists Chris Hoy and Victoria Pendleton. Each of these case studies identifies their chosen events as intra-narrative endings within larger, ongoing texts, and analyses the various ways in which they operate within those wider texts. This thesis is intended to make a contribution to the emerging field of endings studies within television by shifting the understanding of endings away from a dominant literary model which overwhelmingly focusses on the terminus of the text, to a more televisually specific model which pays attention to the particular contexts of the medium’s production and reception.
Abstract:
Master's dissertation, Biomedical Sciences, 28 June 2016, Universidade dos Açores.
Abstract:
Denitrification is a microbially mediated process that converts nitrate (NO3-) to dinitrogen (N2) gas and has implications for soil fertility, climate change, and water quality. Using PCR, qPCR, and T-RFLP, I investigated the effects of environmental drivers and land management on the abundance and composition of denitrification functional genes. The environmental variables affecting gene abundance were soil type, soil depth, nitrogen concentrations, soil moisture, and pH, although each gene was unique in its spatial distribution and controlling factors. The inclusion of microbial variables, specifically genotype and gene abundance, improved denitrification models, highlighting the benefit of including microbial data in modeling denitrification. Along with some evidence of niche selection, I show that nirS is a good predictor of denitrification enzyme activity (DEA) and the N2O:N2 ratio, especially in alkaline and wetland soils. nirK was correlated with N2O production and became a stronger predictor of DEA in acidic soils, indicating that nirK and nirS are not ecologically redundant.
Abstract:
Crop models are simplified mathematical representations of the interacting biological and environmental components of the dynamic soil–plant–environment system. Sorghum crop modeling has evolved in parallel with crop modeling capability in general, since its origins in the 1960s and 1970s. Here we briefly review the trajectory in sorghum crop modeling leading to the development of advanced models. We then (i) overview the structure and function of the sorghum model in the Agricultural Production Systems sIMulator (APSIM) to exemplify advanced modeling concepts that suit both agronomic and breeding applications, (ii) review an example of use of sorghum modeling in supporting agronomic management decisions, (iii) review an example of the use of sorghum modeling in plant breeding, and (iv) consider implications for future roles of sorghum crop modeling. Modeling and simulation provide an avenue to explore consequences of crop management decision options in situations confronted with risks associated with seasonal climate uncertainties. Here we consider the possibility of manipulating planting configuration and density in sorghum as a means to manipulate the productivity–risk trade-off. A simulation analysis of decision options is presented and avenues for its use with decision-makers discussed. Modeling and simulation also provide opportunities to improve breeding efficiency by either dissecting complex traits to more amenable targets for genetics and breeding, or by trait evaluation via phenotypic prediction in target production regions to help prioritize effort and assess breeding strategies. Here we consider studies on the stay-green trait in sorghum, which confers yield advantage in water-limited situations, to exemplify both aspects. The possible future roles of sorghum modeling in agronomy and breeding are discussed as are opportunities related to their synergistic interaction.
The potential to add significant value to the revolution in plant breeding associated with genomic technologies is identified as the new modeling frontier.
Abstract:
The transition period is associated with the peak incidence of production problems, metabolic disorders and infectious diseases in dairy cows (Drackley, 1999). During this time the cow’s immune system seems to be weakened; it is apparent that metabolic challenges associated with the onset of lactation are factors capable of affecting immune function. However, the reasons for this state are not entirely clear (Goff, 2006). The negative energy balance associated with parturition leads to extensive mobilization of fatty acids stored in adipose tissue, thus causing marked elevations in blood non-esterified fatty acid (NEFA) and β-hydroxybutyrate (BHBA) concentrations (Drackley et al., 2001). The prepartal level of dietary energy can potentially affect adipose tissue deposition and, thus, the amount of NEFA released into blood and available for metabolism in the liver (Drackley et al., 2005). Current feeding practices for pregnant non-lactating cows have been called into question because increasing amounts of moderate-to-high energy diets (i.e. those more similar to lactation diets in energy content) during the last 3 wk prepartum have largely failed to overcome peripartal health problems, excessive body condition loss after calving, or declining fertility (Beever, 2006). Current prepartal feeding practices can lead to elevated intakes of energy, which can increase fat deposition in the viscera and, upon parturition, lead to compromised liver metabolism (Beever, 2006, Drackley et al., 2005). Our general hypothesis was that overfeeding dietary energy during the dry period, accompanied by the metabolic challenges associated with the onset of lactation, would render the cow’s immune function less responsive early postpartum. The chapters in this dissertation evaluated neutrophil function, metabolic and inflammation indices, and gene expression as affected by the plane of dietary energy prepartum and an early postpartum inflammatory challenge in dairy cows.
The diet effect in this experiment was substantial during the transition period and potentially throughout the entire lactation. Changes in energy balance were observed and provided a good model for studying the challenges associated with the onset of lactation. Overall, the LPS model provided a consistent response representing an inflammation incident; however, the changes in metabolic indices were short-lived and difficult to detect in most cases during the days following the challenge. In general, overfeeding dietary energy during the dry period resulted in a less responsive immune function during the early postpartum period. In other words, controlling dietary energy prepartum has greater benefits for the dairy cow during transition.
Abstract:
Water regimes in the Brazilian Cerrados are sensitive to climatological disturbances and human intervention. The risk that critical water-table levels are exceeded over long periods of time can be estimated by applying stochastic methods in modeling the dynamic relationship between water levels and driving forces such as precipitation and evapotranspiration. In this study, a transfer function-noise model, the so-called PIRFICT model, is applied to estimate the dynamic relationship between water-table depth and precipitation surplus/deficit in a watershed with a groundwater monitoring scheme in the Brazilian Cerrados. Critical limits were defined for a period in the Cerrados agricultural calendar, the end of the rainy season, when extremely shallow levels (< 0.5-m depth) can pose a risk to plant health and machinery before harvesting. By simulating time-series models, the risk of exceeding critical thresholds during a continuous period of time (e.g. 10 days) is described by probability levels. These simulated probabilities were interpolated spatially using universal kriging, incorporating information related to the drainage basin from a digital elevation model. The resulting map reduced model uncertainty. Three areas were defined as presenting potential risk at the end of the rainy season. These areas deserve attention with respect to water-management and land-use planning.
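The risk measure described above — the probability that the water table stays shallower than a critical level for a continuous run of days — can be sketched generically by Monte Carlo simulation. The AR(1)-type series below is only a stand-in for the calibrated PIRFICT transfer function-noise model, and all parameter values are illustrative assumptions, not taken from the study:

```python
import random

def exceedance_risk(n_sims=2000, n_days=90, threshold=-0.5, run_length=10,
                    mean_depth=-1.0, phi=0.9, noise_sd=0.15, seed=42):
    """Estimate P(water table shallower than `threshold` m for at least
    `run_length` consecutive days within `n_days`) by simulating a simple
    AR(1) stand-in for the fitted time-series model.

    Depths are negative below the surface, so -0.5 means 0.5 m depth and
    "shallower than the threshold" means depth > threshold.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        depth = mean_depth
        run = 0
        exceeded = False
        for _ in range(n_days):
            # mean-reverting daily water-table fluctuation (illustrative)
            depth = mean_depth + phi * (depth - mean_depth) + rng.gauss(0.0, noise_sd)
            run = run + 1 if depth > threshold else 0
            if run >= run_length:
                exceeded = True
                break
        hits += exceeded
    return hits / n_sims
```

In the study itself such probabilities would be computed per monitoring well from the fitted model and then interpolated with universal kriging; this sketch only shows the simulation step.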
Abstract:
Many geological formations consist of crystalline rocks that have very low matrix permeability but allow flow through an interconnected network of fractures. Understanding the flow of groundwater through such rocks is important in considering disposal of radioactive waste in underground repositories. A specific area of interest is the conditioning of fracture transmissivities on measured values of pressure in these formations. This is the process where the values of fracture transmissivities in a model are adjusted to obtain a good fit of the calculated pressures to measured pressure values. While there are existing methods to condition transmissivity fields on transmissivity, pressure and flow measurements for a continuous porous medium there is little literature on conditioning fracture networks. Conditioning fracture transmissivities on pressure or flow values is a complex problem because the measurements are not linearly related to the fracture transmissivities and they are also dependent on all the fracture transmissivities in the network. We present a new method for conditioning fracture transmissivities on measured pressure values based on the calculation of certain basis vectors; each basis vector represents the change to the log transmissivity of the fractures in the network that results in a unit increase in the pressure at one measurement point whilst keeping the pressure at the remaining measurement points constant. The fracture transmissivities are updated by adding a linear combination of basis vectors and coefficients, where the coefficients are obtained by minimizing an error function. A mathematical summary of the method is given. This algorithm is implemented in the existing finite element code ConnectFlow developed and marketed by Serco Technical Services, which models groundwater flow in a fracture network. Results of the conditioning are shown for a number of simple test problems as well as for a realistic large scale test case.
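Under a linearized model, the basis-vector construction described above can be sketched with generic linear algebra. The sketch assumes a sensitivity (Jacobian) matrix J with entries ∂p_i/∂log T_j is available from the flow solver; the function name and the pseudoinverse construction are illustrative assumptions, not ConnectFlow's actual algorithm:

```python
import numpy as np

def condition_log_transmissivities(J, log_T, p_calc, p_meas):
    """One linearized conditioning step for fracture transmissivities.

    J       : (m, n) sensitivities dp_i / dlog(T_j) from the flow model
    log_T   : (n,)   current log fracture transmissivities
    p_calc  : (m,)   pressures computed with log_T
    p_meas  : (m,)   measured pressures

    Basis vector b_i = minimum-norm change in log T producing a unit
    pressure increase at measurement point i and no change at the other
    points; under the linearization this is column i of the Moore-Penrose
    pseudoinverse of J.  The update sum_i(alpha_i * b_i) with coefficients
    alpha = p_meas - p_calc then moves every computed pressure onto its
    measured value, to first order.
    """
    B = np.linalg.pinv(J)        # columns are the basis vectors b_i
    alpha = p_meas - p_calc      # coefficients from the pressure misfit
    return log_T + B @ alpha
```

Because pressures depend nonlinearly on the transmissivities, a step like this would be iterated, recomputing J and p_calc from the flow model after each update.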
Abstract:
The current Amazon landscape consists of heterogeneous mosaics formed by interactions between the original forest and productive activities. Recognizing and quantifying the characteristics of these landscapes is essential for understanding agricultural production chains, assessing the impact of policies, and planning future actions. Our main objective was to construct the regionalization of agricultural production for Rondônia State (Brazilian Amazon) at the municipal level. We adopted a decision tree approach, using land use maps derived from remote sensing data (PRODES and TerraClass) combined with socioeconomic data. The decision trees allowed us to allocate municipalities to one of five agricultural production systems: (i) coexistence of livestock production and intensive agriculture; (ii) semi-intensive beef and milk production; (iii) semi-intensive beef production; (iv) intensive beef and milk production; and (v) intensive beef production. These production systems are, respectively, linked to mechanized agriculture (i), traditional cattle farming with low management, with (ii) or without (iii) a significant presence of dairy farming, and to more intensive livestock farming with (iv) or without (v) a significant presence of dairy farming. The municipalities and associated production systems were then characterized using a wide variety of quantitative metrics grouped into four dimensions: (i) agricultural production; (ii) economics; (iii) territorial configuration; and (iv) social characteristics. We found that production systems linked to mechanized agriculture predominate in the south of the state, while intensive farming is mainly found in the center of the state. Semi-intensive livestock farming is mainly located close to the southwest frontier and in the north of the state, where human occupation of the territory is not fully consolidated. This distributional pattern reflects the origins of the agricultural production system of Rondônia.
Moreover, the characterization of the production systems provides insights into the pattern of occupation of the Amazon and the socioeconomic consequences of continuing agricultural expansion.
Development of new scenario decomposition techniques for linear and nonlinear stochastic programming
Abstract:
A classical approach to two- and multi-stage optimization problems under uncertainty is scenario analysis. Here, the uncertain problem data are modeled as random vectors with finite, stage-specific supports; each realization represents a scenario. Working with scenarios makes it possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multi-stage stochastic programming problems. Despite its complete decomposition by scenario, the efficiency of progressive hedging is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to track the progress of the algorithm more closely. Numerical experiments on instances of multi-stage stochastic linear problems suggest that most existing techniques may either converge prematurely to a suboptimal solution or converge to the optimal solution, but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review existing techniques and propose replacing the quadratic term with a linear one. Although our method remains to be tested, we expect it to reduce some of the numerical and theoretical difficulties of progressive hedging.
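The progressive hedging iteration discussed above can be sketched on a toy two-stage problem. The example below is illustrative (scenario data and penalty value are invented): it minimizes the expected quadratic cost E[(x − d_s)²] over a shared first-stage decision x, where each scenario subproblem has a closed-form solution:

```python
def progressive_hedging(demands, probs, r=1.0, iters=100):
    """Progressive hedging on the toy problem min_x sum_s p_s (x - d_s)^2.

    Each scenario subproblem  min_x (x - d_s)^2 + w_s x + (r/2)(x - xbar)^2
    has the closed-form minimizer  x_s = (2 d_s - w_s + r xbar) / (2 + r).
    The nonanticipativity multipliers are updated by w_s += r (x_s - xbar),
    and xbar is the probability-weighted average of the scenario solutions.
    """
    w = [0.0] * len(demands)
    xbar = sum(p * d for p, d in zip(probs, demands))  # initial guess
    for _ in range(iters):
        xs = [(2 * d - wi + r * xbar) / (2 + r) for d, wi in zip(demands, w)]
        xbar = sum(p * x for p, x in zip(probs, xs))
        w = [wi + r * (x - xbar) for wi, x in zip(w, xs)]
    return xbar
```

The true optimum of this toy problem is the probability-weighted mean of the demands, which gives a direct check of convergence; the sensitivity to the penalty r that the abstract describes can be observed by varying it here.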
Abstract:
Metaheuristics are widely used in discrete optimization. They make it possible to obtain a good-quality solution in reasonable time for problems that are large, complex, and hard to solve. Metaheuristics often have many parameters that the user must tune manually for a given problem. The goal of an adaptive metaheuristic is to let the method adjust some of these parameters automatically, based on the instance being solved. By exploiting prior knowledge of the problem together with notions from machine learning and related fields, an adaptive metaheuristic provides a more general and automatic way of solving problems. Global optimization of mining complexes aims to schedule material movements in the mines and the processing streams so as to maximize the economic value of the system. Because of the large number of integer variables in the model and the presence of complex and nonlinear constraints, it is often prohibitive to solve these models with the optimizers available in industry; metaheuristics are therefore often used for the optimization of mining complexes. This thesis improves a simulated annealing procedure developed by Goodfellow & Dimitrakopoulos (2016) for the stochastic optimization of mining complexes. The method developed by the authors requires many parameters; one of them governs how the simulated annealing method searches the local neighborhood of solutions. This thesis implements an adaptive neighborhood-search method to improve solution quality. Numerical results show an increase of up to 10% in the value of the economic objective function.
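The adaptive neighborhood search described above can be sketched generically: maintain one weight per neighborhood move, reward moves that improve the incumbent, and sample the next move in proportion to the weights. The objective, the reward rule, and all parameter values below are illustrative assumptions, not the Goodfellow & Dimitrakopoulos formulation:

```python
import math
import random

def adaptive_annealing(objective, x0, neighborhoods, iters=5000,
                       t0=1.0, cooling=0.999, learn=0.1, seed=0):
    """Simulated annealing that adapts the sampling probability of each
    neighborhood move according to the improvement it has produced."""
    rng = random.Random(seed)
    weights = [1.0] * len(neighborhoods)   # one adaptive weight per move type
    x, fx = x0, objective(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        # sample a neighborhood proportionally to its adaptive weight
        i = rng.choices(range(len(neighborhoods)), weights=weights)[0]
        y = neighborhoods[i](x, rng)
        fy = objective(y)
        delta = fy - fx
        # standard Metropolis acceptance test
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            x, fx = y, fy
        if fy < best_f:
            best_x, best_f = y, fy
        # reward improving moves; exponential smoothing keeps weights positive
        reward = max(0.0, -delta)
        weights[i] = (1 - learn) * weights[i] + learn * (0.1 + reward)
        t *= cooling
    return best_x, best_f
```

For example, minimizing f(x) = (x − 3)² with a small-step and a large-step move lets the method learn which step size is productive as the search narrows.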