794 results for Niche of market
Abstract:
To maintain a strict balance between demand and supply in US power systems, the Independent System Operators (ISOs) schedule power plants and determine electricity prices using a market clearing model. For each time period and power plant, this model determines startup and shutdown times, the amount of power produced, the provisioning of spinning and non-spinning reserves, and so on. This deterministic optimization model takes as input the characteristics of all generating units, such as installed generation capacity, ramp rates, minimum up- and down-time requirements, and marginal production costs, as well as forecasts of intermittent energy such as wind and solar, along with the minimum reserve requirement of the whole system. The reserve requirement is set based on the likelihood of outages on the supply side and on forecast errors in demand and intermittent generation. With growing installed capacity of intermittent renewable energy, determining the appropriate level of reserve requirements has become harder. Stochastic market clearing models have been proposed as an alternative to deterministic ones: rather than taking a fixed reserve target as an input, they consider different wind power scenarios and produce the reserve schedule as an output. Using a scaled version of the generation system of PJM, a regional transmission organization (RTO) that coordinates the movement of wholesale electricity in all or parts of 13 states and the District of Columbia, and wind scenarios generated from Bonneville Power Administration (BPA) data, this paper compares the performance of stochastic and deterministic market clearing models.
The two models are compared on their ability to contribute to the affordability, reliability and sustainability of the electricity system, measured in terms of total operational costs, load shedding and air emissions. Building the models and running the tests indicates that a fair comparison is difficult to obtain, owing to the multi-dimensional performance metrics considered here and to the difficulty of setting the models' parameters in a way that does not advantage or disadvantage either modeling framework. Along these lines, this study explores the effect that model assumptions such as reserve requirements, the value of lost load (VOLL) and wind spillage costs have on the comparison of the performance of stochastic vs deterministic market clearing models.
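The core distinction in this abstract can be illustrated with a toy two-generator example: the deterministic model schedules against a point forecast plus a fixed reserve target, while the stochastic model chooses the schedule that minimizes expected cost over wind scenarios, so reserves emerge implicitly. All numbers below are invented; real market clearing models are large MILPs with startup, ramp and network constraints.

```python
# Toy comparison of deterministic vs stochastic market clearing.
# All parameter values are illustrative, not taken from PJM or BPA data.

DEMAND = 100.0                            # MW
CAP    = {"coal": 80.0, "gas": 60.0}      # installed capacity, MW
COST   = {"coal": 20.0, "gas": 50.0}      # marginal cost, $/MWh
VOLL   = 1000.0                           # value of lost load, $/MWh
SCENARIOS = [(0.5, 30.0), (0.5, 10.0)]    # (probability, wind MW)

def dispatch_cost(committed_coal, wind):
    """Real-time cost given a day-ahead coal schedule and realized wind.
    Excess wind is spilled at zero cost in this toy model."""
    residual = max(DEMAND - wind - committed_coal, 0.0)
    gas = min(residual, CAP["gas"])       # fast unit covers the residual
    shed = residual - gas                 # anything left is load shedding
    return committed_coal * COST["coal"] + gas * COST["gas"] + shed * VOLL

# Deterministic clearing: schedule against the point forecast, holding
# back a fixed reserve margin on the coal unit (reserve is an input).
forecast_wind = sum(p * w for p, w in SCENARIOS)
reserve = 15.0  # MW, fixed reserve requirement
det_coal = min(DEMAND - forecast_wind, CAP["coal"] - reserve)
det_cost = sum(p * dispatch_cost(det_coal, w) for p, w in SCENARIOS)

# Stochastic clearing: pick the coal schedule minimizing expected cost
# over the wind scenarios (reserve is an output, not an input).
sto_coal = min(range(0, int(CAP["coal"]) + 1),
               key=lambda g: sum(p * dispatch_cost(g, w) for p, w in SCENARIOS))
sto_cost = sum(p * dispatch_cost(sto_coal, w) for p, w in SCENARIOS)

print(f"deterministic expected cost: {det_cost:.0f}")
print(f"stochastic expected cost:    {sto_cost:.0f}")
```

By construction the stochastic schedule can never have a higher expected cost, which is why the paper's point is that a *fair* comparison hinges on how the deterministic model's reserve target, VOLL and spillage parameters are chosen.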
Abstract:
Hutchinson's (1957; Cold Spring Harbor Symp Quant Biol 22:415-427) niche concept is being used increasingly in the context of global change, and is currently applied to many ecological issues including climate change, exotic species invasion and the management of endangered species. For both the marine and terrestrial realms, there is a growing need to assess the breadth of the niches of individual species and to make comparisons among them to forecast the species' capacity to adapt to global change. In this paper, we describe simple non-parametric multivariate procedures, derived from a method originally used in climatology, to (1) evaluate the breadth of the ecological niche of a species and (2) examine whether the niches are significantly separated. We first applied the statistical procedures to a simple fictive example of 3 species separated by 2 environmental factors in order to describe the technique. We then used it to quantify and compare the ecological niches of 2 key structural marine zooplankton copepod species, Calanus finmarchicus and C. helgolandicus, in the northern part of the North Atlantic Ocean using 3 environmental factors. The test demonstrates that the niches of the two species are significantly separated and that the cold-water species has a larger niche than its warmer-water congener.
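One minimal way to realize a non-parametric niche-separation test of this kind is a permutation test on the distance between multivariate niche centroids. The statistic and the fictive samples below are assumptions of this sketch, not the authors' exact climatology-derived procedure:

```python
import random

def centroid(points):
    """Mean position of a sample in environmental space."""
    dims = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dims)]

def centroid_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(centroid(a), centroid(b))) ** 0.5

def separation_p_value(sp1, sp2, n_perm=2000, seed=0):
    """Permutation test: is the observed distance between the two species'
    niche centroids larger than expected if samples were exchangeable?"""
    rng = random.Random(seed)
    observed = centroid_distance(sp1, sp2)
    pooled = list(sp1) + list(sp2)
    n1 = len(sp1)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)               # random relabelling of samples
        if centroid_distance(pooled[:n1], pooled[n1:]) >= observed:
            hits += 1
    return hits / n_perm

# Fictive samples for two species in a 3-factor niche space
# (e.g. temperature, salinity, chlorophyll); values are invented.
rng = random.Random(1)
sp_cold = [(rng.gauss(5.0, 1.0), rng.gauss(34.0, 0.5), rng.gauss(50.0, 10.0))
           for _ in range(40)]
sp_warm = [(rng.gauss(13.0, 1.0), rng.gauss(35.0, 0.5), rng.gauss(45.0, 10.0))
           for _ in range(40)]
p = separation_p_value(sp_cold, sp_warm)
print(f"niche separation p-value: {p:.4f}")
```

A small p-value indicates niches significantly separated in environmental space, mirroring the paper's conclusion for the two Calanus species.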
Abstract:
Objective
To investigate the effect of fast food consumption on mean population body mass index (BMI) and explore the possible influence of market deregulation on fast food consumption and BMI.
Methods
The within-country association between fast food consumption and BMI in 25 high-income member countries of the Organisation for Economic Co-operation and Development between 1999 and 2008 was explored through multivariate panel regression models, after adjustment for per capita gross domestic product, urbanization, trade openness, lifestyle indicators and other covariates. The possible mediating effect of annual per capita intake of soft drinks, animal fats and total calories on the association between fast food consumption and BMI was also analysed. Two-stage least squares regression models, with economic freedom as an instrumental variable, were estimated to study the causal effect of fast food consumption on BMI.
Findings
After adjustment for covariates, each 1-unit increase in annual fast food transactions per capita was associated with an increase of 0.033 kg/m2 in age-standardized BMI (95% confidence interval, CI: 0.013–0.052). Only the intake of soft drinks – not animal fat or total calories – mediated the observed association (β: 0.030; 95% CI: 0.010–0.050). Economic freedom was an independent predictor of fast food consumption (β: 0.27; 95% CI: 0.16–0.37). When economic freedom was used as an instrumental variable, the association between fast food and BMI weakened but remained significant (β: 0.023; 95% CI: 0.001–0.045).
Conclusion
Fast food consumption is an independent predictor of mean BMI in high-income countries. Market deregulation policies may contribute to the obesity epidemic by facilitating the spread of fast food.
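The reported coefficient can be read off directly: each additional annual fast food transaction per capita is associated with +0.033 kg/m2 in age-standardized BMI. A back-of-envelope sketch, assuming simple linear extrapolation over a small range (an assumption of this sketch, not a claim of the study):

```python
# Point estimate and 95% CI reported in the abstract, in kg/m^2 per
# additional annual fast food transaction per capita.
beta, ci_low, ci_high = 0.033, 0.013, 0.052

def bmi_shift(extra_transactions):
    """Associated BMI change (kg/m^2) with its 95% CI, assuming linearity."""
    return (beta * extra_transactions,
            ci_low * extra_transactions,
            ci_high * extra_transactions)

point, lo95, hi95 = bmi_shift(10)  # e.g. 10 more transactions per capita/year
print(f"+10 transactions/year: +{point:.2f} kg/m^2 (95% CI {lo95:.2f}-{hi95:.2f})")
```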
Abstract:
This paper extends original insights of resource-advantage theory (Hunt & Morgan, 1995) to a specific analysis of the moderators of the capabilities-performance relationship such as market orientation, marketing strategy and organizational power. Using established measures and a representative sample of UK firms drawn from Verhoef and Leeflang’s data (2009), our study tests new hypotheses to explain how different types of marketing capabilities contribute to firm performance. The application of resource-advantage theory advances theorising on both marketing and organisational antecedents of firm performance and the causal mechanisms by which competitive advantage is generated.
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master’s Double Degree in Finance and Financial Economics from NOVA – School of Business and Economics and Maastricht University
Abstract:
The aim of the study was to determine the market potential and demand in Europe for a fire-retardant laminate paper used as a raw material. Their development was assessed by analysing the factors affecting demand. The research methodology combined several approaches, mainly descriptive and predictive. The study was based on both primary and secondary data. Primary data were obtained from users of the product and from sales representatives, and through interviews with the producer company's personnel. Secondary data were also collected, but sources relevant to the study's objectives were not abundantly available. For this reason, primary data played a somewhat more important role in the study than secondary data, which is common in industrial market research. The product's future prospects appear fairly good. The theoretical market potential is large compared with the current sales volume, but increasing sales will require certain measures. In the future, attention should be paid to product image, pricing and the comprehensive maximization of quality. The study identified signs of demand growth over the next couple of years. The theoretical market potential could also grow, since demand for fire-retardant laminates appears to be increasing in Europe, particularly in the construction sector.
Abstract:
This paper examines the factors associated with Canadian firms voluntarily disclosing climate change information through the Carbon Disclosure Project. Five hypotheses are presented to explain the factors influencing management's decision to disclose this information: a response to shareholder activism, domestic institutional investor shareholder activism, signalling, litigation risk, and low-cost publicity. Binary logistic regressions and a cross-sectional analysis of the equity market's response to the environmental disclosures were used to test these hypotheses. Support was found for shareholder activism, low-cost publicity, and litigation risk. However, the equity market's response was not found to be statistically significant.
Abstract:
Heavy metal contamination in the environment may lead to bioaccumulation and, in turn, biomagnification. Cheap and effective technologies are therefore needed to protect precious natural resources and biological life. A suitable technique is one that meets the technical and environmental criteria for a particular remediation problem; it should be site-specific, owing to spatial and climatic variations, and may not be economically feasible everywhere. The search for new environmental remediation technologies for removing toxic metals from wastewaters has directed attention to adsorption, based on the metal-binding capacities of various adsorbent materials. The present study therefore aims to identify and evaluate the most current mathematical formulations describing sorption processes. Although a vast amount of research has been carried out on metal removal by adsorption using activated carbon, few specific research data are available across scientific institutions. The present work highlights the seasonal and spatial variations in the distribution of selected heavy metals among various geochemical phases of the Cochin estuarine system, and also examines an environmental therapeutic/remedial approach based on adsorption with activated charcoal and chitosan to reduce, and thereby control, metallic pollution. The thesis is organized into seven chapters with further subdivisions. The first chapter is introductory: it states the necessity of reducing or preventing water pollution, given its hazardous impact on the environment and on the health of living organisms, and draws on a careful review of the literature relevant to the present study. It provides a concise description of the study area, its geology and general hydrology, and sets out the major objectives and scope of the present study.
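Two standard mathematical formulations of sorption that a study like this would evaluate are the Langmuir and Freundlich isotherms. The equations are textbook-standard; the parameter values below are invented for illustration, not fitted to Cochin estuary data:

```python
# Langmuir:   q_e = q_m * K_L * C_e / (1 + K_L * C_e)   (monolayer, saturating)
# Freundlich: q_e = K_F * C_e ** (1/n)                   (empirical, power law)
# q_e: metal sorbed at equilibrium (mg/g); C_e: equilibrium conc. (mg/L)

def langmuir(ce, qm, kl):
    return qm * kl * ce / (1.0 + kl * ce)

def freundlich(ce, kf, n):
    return kf * ce ** (1.0 / n)

# Illustrative (invented) parameters for a metal ion on activated charcoal:
qm, kl = 25.0, 0.4      # mg/g, L/mg
kf, n  = 7.0, 2.2       # (mg/g)(L/mg)^(1/n), dimensionless

for ce in (0.5, 2.0, 10.0):
    print(f"Ce={ce:5.1f} mg/L  Langmuir q_e={langmuir(ce, qm, kl):6.2f}  "
          f"Freundlich q_e={freundlich(ce, kf, n):6.2f}")
```

The qualitative difference matters when comparing formulations against data: Langmuir uptake saturates at q_m as sites fill, while the Freundlich curve keeps rising with concentration.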
Abstract:
Recent developments in technologies for decentralized energy supply structures, renewable energy, wholesale energy markets, mini- and microgrids, distributed intelligence, and information and data transmission technologies will decisively shape the future energy landscape. Current research efforts to combine all of these technologies lay the groundwork for a future smart grid. This new concept rests on the following pillars: supply comes from decentralized generation units rather than large central producers; control no longer acts on supply alone but also enables active management of demand; the system's input parameters are no longer only mechanical or electrical quantities but also price signals; and renewable energy sources are no longer merely connected but fully integrated into the grid. The present work fits into this new smart-grid concept. Since the future grid will be configured in a decentralized way, a transition phase is necessary, and this transition requires technologies that can integrate all of these new concepts into today's grids. This work demonstrates that a microgrid in a medium-sized grid section can act as a grid-protecting element. To this end, a new energy management system for microgrids, the Cluster Management System (CMS), was developed. The CMS operates as an economically oriented operational optimizer and acts on the system like an intelligent load, responding to price signals. As soon as an overload of the system is detected through a frequency drop, the microgrid changes its behaviour and regulates its load to help stabilize the main grid. The effectiveness and feasibility of the developed concept were demonstrated through simulations and successful laboratory experiments.
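The CMS behaviour described here, an intelligent load that responds to price signals and sheds consumption when a frequency drop signals overload, can be sketched as a simple setpoint rule. The thresholds, sensitivity and droop constant below are invented; the real CMS is an economically optimizing controller validated in simulation and lab tests:

```python
NOMINAL_HZ = 50.0
UNDERFREQ_HZ = 49.8   # below this, the main grid is treated as overloaded

def cms_setpoint(base_load_kw, price_eur_mwh, freq_hz,
                 price_ref=60.0, price_sens=0.005, droop_kw_per_hz=200.0):
    """Microgrid load setpoint in kW: trims consumption as prices rise
    (intelligent-load behaviour) and sheds load proportionally on
    underfrequency to support main-grid stabilization."""
    # Price response: reduce load linearly above the reference price.
    load = base_load_kw * (1.0 - price_sens * (price_eur_mwh - price_ref))
    # Frequency response: proportional shedding below the threshold.
    if freq_hz < UNDERFREQ_HZ:
        load -= droop_kw_per_hz * (UNDERFREQ_HZ - freq_hz)
    return max(load, 0.0)

normal = cms_setpoint(500.0, 60.0, 50.0)    # normal operation
stress = cms_setpoint(500.0, 60.0, 49.5)    # underfrequency event
print(f"normal: {normal:.1f} kW, underfrequency: {stress:.1f} kW")
```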
Abstract:
The cost of tendering in the construction industry is widely suspected to be excessive, but there is little robust empirical evidence to demonstrate this. It also seems that innovative working practices may reduce the costs of undertaking construction projects and the consequent improvement in relationships should increase overall value for money. The aim of this proposed research project is to develop mechanisms for measuring the true costs of tendering based upon extensive in-house data collection undertaken in a range of different construction firms. The output from this research will enable all participants in the construction process to make better decisions about how to select members of the team and identify the price and scope of their obligations.
Abstract:
We present a procedure for estimating two quantities that define the spatial externality in discrete choice commonly referred to as 'the neighbourhood effect'. One quantity, the propensity for neighbours to make the same decision, reflects traditional preoccupations; the other, the extent of the neighbourhood itself, is novel. Because both quantities have fundamental bearing on the magnitude of the spatial externality, it is desirable to have a robust algorithm for their estimation. Using recent advances in Bayesian estimation and model comparison, we devise such an algorithm and illustrate its application to a sample of northern-Filipino smallholders. We determine that a significant, positive neighbourhood effect exists; that, among the 12 geographical units comprising the sample, the neighbourhood spans a three-unit radius; and that policy prescriptions are significantly altered when calculations account for the spatial externality.
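The two estimated quantities, the same-decision propensity and the neighbourhood radius, can be illustrated with a deliberately crude stand-in: a pairwise-agreement likelihood maximized by grid search. The data and the pairwise model are inventions of this sketch; the paper itself uses full Bayesian estimation and model comparison, which this does not reproduce:

```python
import math
from itertools import combinations

# 12 geographical units on a line, echoing the sample size in the abstract;
# the spatially clustered adoption pattern below is invented.
decisions = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]

def log_lik(r, rho):
    """Toy likelihood: pairs within distance r agree with probability rho
    (the neighbourhood effect); all other pairs agree with probability 0.5."""
    ll = 0.0
    for i, j in combinations(range(len(decisions)), 2):
        p = rho if abs(i - j) <= r else 0.5
        agree = decisions[i] == decisions[j]
        ll += math.log(p if agree else 1.0 - p)
    return ll

# Joint grid search over radius r and propensity rho.
grid = [(r, rho / 100) for r in range(1, 7) for rho in range(50, 100)]
best_r, best_rho = max(grid, key=lambda g: log_lik(*g))
print(f"estimated radius={best_r}, same-decision propensity={best_rho:.2f}")
```

A propensity estimate above 0.5 corresponds to the paper's significant, positive neighbourhood effect; the estimated radius is the analogue of its three-unit neighbourhood span.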
Abstract:
The concept of an organism's niche is central to ecological theory, but an operational definition is needed that allows both its experimental delineation and interpretation of field distributions of the species. Here we use population growth rate (hereafter, pgr) to define the niche as the set of points in niche space where pgr > 0. If there are just two axes to the niche space, their relationship to pgr can be pictured as a contour map in which pgr varies along the axes in the same way that the height of land above sea level varies with latitude and longitude. In laboratory experiments we measured the pgr of Daphnia magna over a grid of values of pH and Ca2+, and so defined its "laboratory niche" in pH-Ca2+ space. The position of the laboratory niche boundary suggests that population persistence is only possible above 0.5 mg Ca2+/L and between pH 5.75 and pH 9, though more Ca2+ is needed at lower pH values. To see how well the measured niche predicts the field distribution of D. magna, we examined relevant field data from 422 sites in England and Wales. Of the 58 colonized water bodies, 56 lay within the laboratory niche. Very few of the sites near the niche boundary were colonized, probably because pgr there is so low that populations are vulnerable to extinction by other factors. Our study shows how the niche can be quantified and used to predict field distributions successfully.
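The pgr-based definition (the niche as the region where pgr > 0) lends itself directly to grid-based delineation. The response surface below is a made-up stand-in for the measured Daphnia magna surface; only the boundary values (Ca2+ >= 0.5 mg/L, pH between 5.75 and 9, more Ca2+ needed at low pH) echo the abstract:

```python
def pgr(ph, ca):
    """Toy population growth rate surface: positive inside the reported
    bounds, with the Ca2+ requirement rising as pH falls. The functional
    form and the 0.4 slope are invented for illustration."""
    if ca < 0.5 or not (5.75 < ph < 9.0):
        return -1.0
    ca_needed = 0.5 + max(0.0, 7.0 - ph) * 0.4   # more Ca2+ needed at low pH
    return 1.0 if ca >= ca_needed else -1.0

# Delineate the niche on a grid over pH x Ca2+ space, the computational
# analogue of drawing the pgr = 0 contour.
ph_axis = [5.5 + 0.25 * i for i in range(16)]    # pH 5.5 .. 9.25
ca_axis = [0.25 * i for i in range(1, 17)]       # Ca2+ 0.25 .. 4.0 mg/L
niche = {(ph, ca) for ph in ph_axis for ca in ca_axis if pgr(ph, ca) > 0}

print(f"{len(niche)} of {len(ph_axis) * len(ca_axis)} grid cells lie inside the niche")
```

A field site's environmental coordinates can then be checked against this set to predict whether the species should persist there, which is how the laboratory niche was compared with the 422 field sites.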