897 results for stochastic dominance constraints
Abstract:
This data set contains aboveground community plant biomass (Sown plant community, Weed plant community, Dead plant material, and Unidentified plant material; all measured as dry weight) and species-specific biomass from the sown species of the dominance experiment plots of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the dominance experiment, 206 grassland plots of 3.5 x 3.5 m were established from a pool of 9 plant species that can be dominant in semi-natural grassland communities of the study region. In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 3, 4, 6, and 9 species). Plots were maintained by bi-annual weeding and mowing. Aboveground community biomass was harvested twice, in May and August 2004, on all experimental plots of the dominance experiment by clipping the vegetation at 3 cm above ground in two rectangles of 0.2 x 0.5 m per experimental plot. The location of these rectangles was assigned by random selection of coordinates within the central area of the plots (excluding an outer edge of 50 cm); the positions of the rectangles within plots were identical for all plots. The harvested biomass was sorted into categories: individual species for the sown plant species, weed plant species (species not sown at the particular plot), detached dead plant material, and remaining plant material that could not be assigned to any category. All biomass was dried to constant weight (70 °C, ≥ 48 h) and weighed. Sown plant community biomass was calculated as the sum of the biomass of the individual sown species. The mean of both samples per plot and the individual measurements are provided in the data file. Overall, analyses of the community biomass data have identified species richness and the presence of particular species as important drivers of a positive biodiversity-productivity relationship.
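The data-file derivations described above (sown community biomass as the sum of the individual sown species, and plot-level values as the mean of the two harvested rectangles) can be sketched as follows; the plot codes and species column names are purely illustrative, not the data set's actual identifiers.

```python
import pandas as pd

# Hypothetical per-sample records: two 0.2 x 0.5 m rectangles per plot, with
# dry-weight biomass (g) per sown species and per residual category.
samples = pd.DataFrame({
    "plot":    ["B1A01", "B1A01", "B1A02", "B1A02"],
    "sample":  [1, 2, 1, 2],
    "Fes.rub": [12.4, 10.8, 0.0, 0.0],   # sown-species columns (names invented)
    "Arr.ela": [30.1, 28.5, 22.0, 25.3],
    "weeds":   [1.2, 0.9, 2.4, 1.8],
    "dead":    [4.0, 3.6, 5.1, 4.4],
})

species_cols = ["Fes.rub", "Arr.ela"]
# Sown community biomass = sum of the individual sown-species biomasses.
samples["sown_community"] = samples[species_cols].sum(axis=1)

# Plot-level values = mean of the two harvested rectangles per plot.
plot_means = samples.groupby("plot")[species_cols + ["sown_community", "weeds", "dead"]].mean()
print(plot_means)
```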
Abstract:
This data set contains aboveground community plant biomass (Sown plant community, Weed plant community, Dead plant material, and Unidentified plant material; all measured as dry weight) and species-specific biomass from the sown species of the dominance experiment plots of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the dominance experiment, 206 grassland plots of 3.5 x 3.5 m were established from a pool of 9 plant species that can be dominant in semi-natural grassland communities of the study region. In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 3, 4, 6, and 9 species). Plots were maintained by bi-annual weeding and mowing. Aboveground community biomass was harvested twice, in May and August 2005, on all experimental plots of the dominance experiment by clipping the vegetation at 3 cm above ground in two rectangles of 0.2 x 0.5 m per experimental plot. The location of these rectangles was assigned by random selection of coordinates within the central area of the plots (excluding an outer edge of 50 cm); the positions of the rectangles within plots were identical for all plots. The harvested biomass was sorted into categories: individual species for the sown plant species, weed plant species (species not sown at the particular plot), detached dead plant material, and remaining plant material that could not be assigned to any category. All biomass was dried to constant weight (70 °C, ≥ 48 h) and weighed. Sown plant community biomass was calculated as the sum of the biomass of the individual sown species. The mean of both samples per plot and the individual measurements are provided in the data file. Overall, analyses of the community biomass data have identified species richness and the presence of particular species as important drivers of a positive biodiversity-productivity relationship.
Abstract:
In multi-attribute utility theory, it is often not easy to elicit precise values for the scaling weights representing the relative importance of criteria. A very widespread approach is to gather incomplete information. A recent approach for dealing with such situations is to use information about each alternative's intensity of dominance, known as dominance measuring methods. Different dominance measuring methods have been proposed, and simulation studies have been carried out to compare these methods with each other and with other approaches, but only when ordinal information about weights is available. In this paper, we use Monte Carlo simulation techniques to analyse the performance of such methods and adapt them to deal with weight intervals, weights fitting independent normal probability distributions, or weights represented by fuzzy numbers. Moreover, dominance measuring method performance is also compared with a widely used methodology for dealing with incomplete information on weights, stochastic multicriteria acceptability analysis (SMAA). SMAA is based on exploring the weight space to describe the evaluations that would make each alternative the preferred one.
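The SMAA idea mentioned above, exploring the weight space to see which weights make each alternative preferred, can be illustrated with a small Monte Carlo sketch. The decision matrix and the ordinal constraint w1 ≥ w2 ≥ w3 are hypothetical; this is not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decision matrix: 4 alternatives scored on 3 criteria (higher is better).
values = np.array([
    [0.9, 0.2, 0.5],
    [0.6, 0.6, 0.6],
    [0.3, 0.9, 0.4],
    [0.5, 0.5, 0.9],
])

n_samples = 20_000
k = values.shape[1]

# Sample weights uniformly from the simplex, then sort each vector descending to
# respect the ordinal information w1 >= w2 >= w3 (criterion 1 most important).
w = rng.dirichlet(np.ones(k), size=n_samples)
w = -np.sort(-w, axis=1)

scores = values @ w.T              # (alternatives, samples): additive utilities
winners = scores.argmax(axis=0)    # best alternative under each sampled weight vector

# Rank-1 acceptability index: share of the weight space where each alternative wins.
acceptability = np.bincount(winners, minlength=len(values)) / n_samples
print(acceptability)
```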
Abstract:
This paper analyses the mechanisms through which binding finance constraints can induce debt-constrained firms to improve technical efficiency in order to guarantee positive profits. This hypothesis is tested on a sample of Italian manufacturing firms. Technical efficiency scores are computed by estimating parametric production frontiers using the one-stage approach of Battese and Coelli [Battese, G., Coelli, T., 1995. A model for technical inefficiency effects in a stochastic frontier production function for panel data. Empirical Economics 20, 325-332]. The results support the hypothesis that a restriction in the availability of financial resources can positively affect efficiency. © 2004 Elsevier B.V. All rights reserved.
Abstract:
* This research was supported by a grant from the Greek Ministry of Industry and Technology.
Abstract:
Access to healthcare is a major problem in which patients are deprived of timely admission to care. Poor access has resulted in significant but avoidable healthcare cost, poor quality of healthcare, and deterioration in general public health. Advanced Access is a simple and direct approach to appointment scheduling in which the majority of a clinic's appointment slots are kept open in order to provide access for immediate or same-day healthcare needs, thereby alleviating the problem of poor access to healthcare. This research formulates a non-linear discrete stochastic mathematical model of the Advanced Access appointment scheduling policy. The model objective is to maximize the expected profit of the clinic subject to constraints on the minimum access to healthcare provided. Patient behavior is characterized with probabilities for no-shows, balking, and related patient choices. Structural properties of the model are analyzed to determine whether Advanced Access patient scheduling is feasible. To solve the complex combinatorial optimization problem, a heuristic that combines a greedy construction algorithm with neighborhood improvement search was developed. The model and the heuristic were used to evaluate the Advanced Access patient appointment policy against existing policies. Trade-offs between profit and access to healthcare are established, and analysis of the input parameters was performed. The trade-off curve is a characteristic curve and was observed to be concave, implying that there exists an access level at which the clinic can be operated at optimal profit. The results also show that, in many scenarios, by switching from an existing scheduling policy to the Advanced Access policy, clinics can improve access without any decrease in profit.
Further, the success of the Advanced Access policy in providing improved access and/or profit depends on the expected value of demand, the variation in demand, and the ratio of demand for same-day versus advance appointments. The contributions of the dissertation are a model of Advanced Access patient scheduling, a heuristic to solve the model, and the use of the model to understand the scheduling policy trade-offs that healthcare clinic managers must make.
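A minimal sketch of the kind of trade-off the dissertation studies (not its actual model): choose how many daily slots to keep open for same-day demand, given a no-show probability for pre-booked patients, stochastic same-day demand, and a minimum-access constraint. All parameter values below are assumed for illustration.

```python
import math

# Assumed clinic parameters (illustrative only).
SLOTS = 20
REVENUE = 100.0          # revenue per completed visit
P_SHOW = 0.6             # show probability for pre-booked appointments
SAME_DAY_MEAN = 7.0      # mean same-day demand, modeled as Poisson
MIN_OPEN = 5             # minimum open slots: the access constraint

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

def expected_profit(open_slots):
    """Expected revenue: pre-booked slots discounted by no-shows, plus
    expected same-day demand actually served, E[min(D, open_slots)]."""
    booked = SLOTS - open_slots
    e_same_day = sum(min(d, open_slots) * poisson_pmf(d, SAME_DAY_MEAN)
                     for d in range(60))  # truncate the Poisson tail
    return REVENUE * (P_SHOW * booked + e_same_day)

# Greedy construction with a simple improvement step (a stand-in for the
# dissertation's greedy-plus-neighborhood-search heuristic): start at the
# access lower bound, then open one more slot while expected profit improves.
best = MIN_OPEN
while best < SLOTS and expected_profit(best + 1) > expected_profit(best):
    best += 1

print(best, round(expected_profit(best), 2))
```

Opening one more slot trades the marginal pre-booked revenue (P_SHOW per slot) against the probability that same-day demand exceeds the slots already open, which is why the resulting profit-vs-access curve is concave.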
Abstract:
This data set contains aboveground community plant biomass (Sown plant community, Weed plant community, Dead plant material, and Unidentified plant material; all measured as dry weight) and species-specific biomass from the sown species of the dominance experiment plots of a large grassland biodiversity experiment (the Jena Experiment; see further details below). In the dominance experiment, 206 grassland plots of 3.5 x 3.5 m were established from a pool of 9 plant species that can be dominant in semi-natural grassland communities of the study region. In May 2002, varying numbers of plant species from this species pool were sown into the plots to create a gradient of plant species richness (1, 2, 3, 4, 6, and 9 species). Plots were maintained by bi-annual weeding and mowing. Aboveground community biomass was harvested twice, in May and August 2008, on all experimental plots of the dominance experiment by clipping the vegetation at 3 cm above ground in two rectangles of 0.2 x 0.5 m per experimental plot. The location of these rectangles was assigned by random selection of coordinates within the central area of the plots (excluding an outer edge of 50 cm); the positions of the rectangles within plots were identical for all plots. The harvested biomass was sorted into categories: individual species for the sown plant species, weed plant species (species not sown at the particular plot), detached dead plant material, and remaining plant material that could not be assigned to any category. All biomass was dried to constant weight (70 °C, ≥ 48 h) and weighed. Sown plant community biomass was calculated as the sum of the biomass of the individual sown species. The mean of both samples per plot and the individual measurements are provided in the data file. Overall, analyses of the community biomass data have identified species richness and the presence of particular species as important drivers of a positive biodiversity-productivity relationship.
Abstract:
Rising anthropogenic CO2 in the atmosphere is accompanied by an increase in oceanic CO2 and a concomitant decline in seawater pH (ref. 1). This phenomenon, known as ocean acidification (OA), has been experimentally shown to impact the biology and ecology of numerous animals and plants (ref. 2), most notably those that precipitate calcium carbonate skeletons, such as reef-building corals (ref. 3). Volcanically acidified water at Maug, Commonwealth of the Northern Mariana Islands (CNMI), is equivalent to near-future predictions for what coral reef ecosystems will experience worldwide due to OA. We provide the first chemical and ecological assessment of this unique site and show that acidification-related stress significantly influences the abundance and diversity of coral reef taxa, leading to the often-predicted shift from a coral- to an algae-dominated state (refs 4, 5). This study provides field evidence that acidification can lead to macroalgae dominance on reefs.
Abstract:
We know that classical thermodynamics, even out of equilibrium, always leads to a stable situation, which means degradation and consequently disorder. Much experimental evidence in different fields shows that organization and order (symmetry breaking) during time and space evolution may appear when the system is maintained far from equilibrium. Order through fluctuations, stochastic processes occurring around critical points, and dissipative structures are the fundamental background of the Prigogine-Glansdorff and Nicolis theory. The thermodynamics of macroscopic fluctuations, the stochastic approach, as well as the deterministic kinetic laws allow a better understanding of the peculiar, fascinating behavior of organized matter. The reason for the occurrence of this situation is directly related to the intrinsic non-linearities of the different mechanisms responsible for the evolution of the system. Moreover, when dealing with interfaces separating two immiscible phases (liquid-gas, liquid-liquid, liquid-solid, solid-solid), the situation is rather more complicated. Indeed, the coupling terms playing the major role in the conditions of instability arise from the peculiar singular static and dynamic properties of the surface and of its vicinity. In other words, the non-linearities are not only intrinsic to classical steps involving feedbacks; they may also be embedded in the non-autonomous character of the surface properties. To illustrate our point we discuss three examples of ordering in far-from-equilibrium conditions: i) formation of chemical structures during the oxidation of metals and alloys; ii) formation of mechanical structures during the oxidation of metals; iii) formation of patterns at a solid-liquid moving interface due to supercooling in a melt of alloy. © 1984, Walter de Gruyter. All rights reserved.
Abstract:
This paper applies a stochastic viability approach to a tropical small-scale fishery, offering a theoretical and empirical example of an ecosystem-based fishery management approach that accounts for food security. The model integrates multiple species, multiple fleets, and uncertainty, as well as profitability, food production, and demographic growth. It is calibrated over the period 2006–2010 using monthly catch and effort data from French Guiana's coastal fishery, involving thirteen species and four fleets. Using projections to the horizon 2040, different management strategies and scenarios are compared from a viability viewpoint, accounting for biodiversity preservation, fleet profitability, and food security. The analysis shows that under certain conditions, viable options can be identified that allow fishing intensity and production to be increased to meet food security requirements with minimum impact on the marine resources.
Abstract:
Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints: limited resources to implement savings retrofits, various suppliers in the market, and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state, and local agencies must determine how to implement projects at lowest cost. The most common methods of implementation planning are suboptimal with respect to cost. Federal, state, and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches that improve on the traditional energy conservation models. Public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code upon any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies seeking the most cost-effective selection when leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2).
An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. The proposed risk aversion method imposes a minimum on the number of projects completed in each stage; a comparative method using Conditional Value at Risk is analyzed, and time consistency is also addressed in this chapter. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach uses the inequalities of McCormick (1976) to re-express constraints that involve the product of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single-congressional-appropriation framework.
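The McCormick (1976) linearization invoked in Chapter 4 can be checked on its simplest case: for binary x and y, the product z = x·y (the kind of interaction term that models sub- or superadditive savings between two selected projects) is captured exactly by three linear constraints. A small enumeration confirms this.

```python
from itertools import product

def mccormick_feasible(x, y, z):
    """McCormick envelope for z = x * y with binary x, y:
    z <= x, z <= y, z >= x + y - 1, together with 0 <= z <= 1."""
    return z <= x and z <= y and z >= x + y - 1 and 0 <= z <= 1

for x, y in product([0, 1], repeat=2):
    # For each binary (x, y), the only feasible integer z is exactly x * y,
    # so the linear constraints reproduce the product without any nonlinearity.
    feasible_z = [z for z in (0, 1) if mccormick_feasible(x, y, z)]
    assert feasible_z == [x * y]
    print(x, y, feasible_z)
```

In a MILP, this lets an interaction term enter the objective as a new binary variable z bound by the three inequalities, keeping the model linear.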
Abstract:
Metaheuristics are widely used in discrete optimization. They make it possible to obtain a good-quality solution in reasonable time for problems that are large, complex, and difficult to solve. Metaheuristics often have many parameters that the user must tune manually for a given problem. The goal of an adaptive metaheuristic is to let the method adjust some of these parameters automatically, based on the instance being solved. By using prior knowledge of the problem together with notions from machine learning and related fields, an adaptive metaheuristic yields a more general and automatic way of solving problems. Global optimization of mining complexes aims to determine the material movements in the mines and the processing flows so as to maximize the economic value of the system. Because of the large number of integer variables in the model and the presence of complex and non-linear constraints, solving these models with the optimizers available in industry is often prohibitive. Metaheuristics are therefore commonly used for the optimization of mining complexes. This thesis improves a simulated annealing procedure developed by Goodfellow & Dimitrakopoulos (2016) for the stochastic optimization of mining complexes. The method developed by the authors requires many parameters to work; one of them governs how the simulated annealing method searches the local neighborhood of solutions. This thesis implements an adaptive neighborhood search method to improve solution quality. Numerical results show an increase of up to 10% in the value of the economic objective function.
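The adaptive-neighborhood idea described in this thesis abstract can be illustrated with a toy simulated annealing sketch (not the Goodfellow & Dimitrakopoulos implementation): each neighborhood is a move size, and the selection probabilities adapt toward moves that yield accepted improvements. The objective function and all parameters are invented for illustration.

```python
import math
import random

random.seed(1)

def objective(x):
    # Toy nonconvex function to minimize (stands in for the economic model).
    return (x - 3.0) ** 2 + math.sin(5 * x)

moves = [0.01, 0.1, 1.0]          # candidate neighborhood step sizes
weights = [1.0] * len(moves)      # adaptive selection weights, updated online

x = 10.0
fx = objective(x)
temp = 5.0

for _ in range(5000):
    # Pick a neighborhood with probability proportional to its learned weight.
    i = random.choices(range(len(moves)), weights=weights)[0]
    cand = x + random.uniform(-moves[i], moves[i])
    fc = objective(cand)
    # Standard Metropolis acceptance rule for simulated annealing.
    if fc < fx or random.random() < math.exp((fx - fc) / temp):
        if fc < fx:
            weights[i] += 0.1     # reward the neighborhood that improved the solution
        x, fx = cand, fc
    temp *= 0.999                 # geometric cooling schedule

print(round(x, 2), round(fx, 3))
```

The reward rule here is deliberately simple; richer schemes (e.g., decaying averages of improvement per neighborhood) are common in adaptive metaheuristics.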
Abstract:
The current dominance of African runners in long-distance running is an intriguing phenomenon that highlights the close relationship between genetics and physical performance. Many factors in the interaction between genotype and phenotype (e.g., high cardiorespiratory fitness, higher hemoglobin concentration, good metabolic efficiency, muscle fiber composition, enzyme profile, diet, altitude training, and psychological aspects) have been proposed in the attempt to explain the extraordinary success of these runners. Increasing evidence shows that genetics may be a determining factor in physical and athletic performance. But could this also be true for African long-distance runners? Based on this question, this brief review examines the role of genetic factors (mitochondrial deoxyribonucleic acid, the Y chromosome, and the angiotensin-converting enzyme and alpha-actinin-3 genes) in the remarkable athletic performance observed in African runners, especially Kenyans and Ethiopians, despite their environmental constraints.