891 results for individual zones of optimal functioning model

Relevance: 100.00%

Publisher:

Abstract:

A decision theory framework can be a powerful technique to derive optimal management decisions for endangered species. We built a spatially realistic stochastic metapopulation model for the Mount Lofty Ranges Southern Emu-wren (Stipiturus malachurus intermedius), a critically endangered Australian bird. Using discrete-time Markov chains to describe the dynamics of a metapopulation and stochastic dynamic programming (SDP) to find optimal solutions, we evaluated the following management decisions: enlarging existing patches, linking patches via corridors, and creating a new patch. This is the first application of SDP to optimal landscape reconstruction and one of the few times that landscape reconstruction dynamics have been integrated with population dynamics. SDP is a powerful tool that has advantages over standard Monte Carlo simulation methods because it can give the exact optimal strategy for every landscape configuration (combination of patch areas and presence of corridors) and pattern of metapopulation occupancy, as well as a trajectory of strategies. It is useful when a sequence of management actions can be performed over a given time horizon, as is the case for many endangered species recovery programs, where only fixed amounts of resources are available in each time step. However, it is generally limited by computational constraints to rather small networks of patches. The model shows that optimal metapopulation management decisions depend greatly on the current state of the metapopulation, and there is no strategy that is universally the best. The extinction probability over 30 yr for the optimal state-dependent management actions is 50-80% better than no management, whereas the best fixed state-independent sets of strategies are only 30% better than no management. This highlights the advantages of using a decision theory tool to investigate conservation strategies for metapopulations.
It is clear from these results that the sequence of management actions is critical, and this can only be effectively derived from stochastic dynamic programming. The model illustrates the underlying difficulty in determining simple rules of thumb for the sequence of management actions for a metapopulation. This use of a decision theory framework extends the capacity of population viability analysis (PVA) to manage threatened species.
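The abstract does not reproduce the model's parameters, but the backward-induction structure of finite-horizon SDP over a discrete-time Markov chain can be sketched as follows. The states, actions, transition probabilities and rewards below are hypothetical placeholders for illustration, not values from the paper:

```python
# Illustrative backward-induction SDP over a toy metapopulation model.
# States, actions, transitions and rewards are hypothetical placeholders.

STATES = [0, 1, 2, 3]          # e.g. number of occupied patches (0 = extinct)
ACTIONS = ["enlarge", "corridor", "new_patch"]
HORIZON = 30                    # 30-yr management horizon, as in the abstract

def transition(s, a):
    """Hypothetical transition distribution P(s' | s, a)."""
    probs = {s2: 0.0 for s2 in STATES}
    up = {"enlarge": 0.3, "corridor": 0.4, "new_patch": 0.2}[a]
    down = 0.2
    probs[min(s + 1, 3)] += up            # occupancy increases
    probs[max(s - 1, 0)] += down          # occupancy decreases
    probs[s] += 1.0 - up - down           # occupancy unchanged
    return probs

def reward(s):
    """Reward 1 while the metapopulation persists, 0 if extinct."""
    return 1.0 if s > 0 else 0.0

# Backward induction: V_t(s) = max_a sum_s' P(s'|s,a) * (r(s') + V_{t+1}(s'))
V = {s: 0.0 for s in STATES}
policy = {}
for t in reversed(range(HORIZON)):
    V_new, policy_t = {}, {}
    for s in STATES:
        best_a, best_v = None, float("-inf")
        for a in ACTIONS:
            v = sum(p * (reward(s2) + V[s2])
                    for s2, p in transition(s, a).items())
            if v > best_v:
                best_a, best_v = a, v
        V_new[s], policy_t[s] = best_v, best_a
    V, policy = V_new, policy_t

print(policy)  # optimal first-step action for each metapopulation state
```

Note how the optimal action is computed separately for every state, which is exactly the state-dependence the abstract emphasises: there is no single universally best strategy, only a best action per landscape and occupancy configuration.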


Management systems standards (MSSs) have developed in an unprecedented manner in the last few years. These MSSs cover a wide array of different disciplines, aims and activities of organisations. Organisations are also populated with an enormous diversity of independent management systems (MSs). An integrated management system (IMS) tends to integrate some or all components of the business. Maximising their integration in one coherent and efficient MS is increasingly a strategic priority and constitutes an opportunity for businesses to be more competitive and, consequently, to promote their sustainable success. Those organisations that are quicker and more efficient in their integration and continuous improvement will have a competitive advantage in obtaining sustainable value in our global and competitive business world. Several scholars have proposed various theoretical approaches to the integration of management sub-systems, leading to the conclusion that there is no common practice for all organisations, as they have different characteristics. Another author shows that several tangible and intangible gains, both for organisations and for their internal and external stakeholders, are achieved with the integration of individual standardised MSs. The purpose of this work was to conceive a flexible, integrator and lean model for IMSs, according to ISO 9001 for quality, ISO 14001 for environment and OHSAS 18001 for occupational health and safety (IMS–QES), that can be adapted and progressively assimilate other MSs, such as SA 8000/ISO 26000 for social accountability, ISO 31000 for risk management and ISO/IEC 27001 for information security management, among others. The IMS–QES model was designed in the real environment of a Portuguese industrial small and medium-sized enterprise that has gradually adopted individual MSSs, in whole or in part, over the years. The developed model is based on a preliminary investigation conducted through a questionnaire.
The research strategy and methods were based on a case study. Among the main findings of the survey we highlight: the creation of added value for the business through the elimination of several organisational wastes; the integrated management of the sustainability components; the elimination of conflicts between independent MSs; dialogue with the main stakeholders and commitment to their ongoing satisfaction and increased contribution to the company's competitiveness; and greater valorisation and motivation of employees as a result of the expansion of their skill base, actions and responsibilities, with their consequent empowerment. A set of key performance indicators (KPIs) supports, from a business excellence perspective, the follow-up of the organisation's progress towards the vision and the achievement of the objectives defined for each component of the IMS model. The conceived model had many phases, and the one presented in this work is the last required for the integration of quality, environment, safety and other individual standardised MSs. Overall, the investigation results justified and prioritised the conception of an IMS–QES model to be implemented at the company where the investigation was conducted, as well as a generic IMS model that is as flexible, integrative and lean as possible, enhancing efficiency and added value both in the present and, fundamentally, in the future.


ISME, Thessaloniki, 2012


Critical Issues in Environmental Taxation: International and Comparative Perspectives: Volume VI, 699-715


The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. The network cost allocation, traditionally used in transmission networks, should be adapted and used in the distribution networks considering the specifications of the connected resources. The main goal is to develop a fairer methodology trying to distribute the distribution network use costs to all players which are using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of direct load control type, energy storage systems (ESS), and electric vehicles with capability of discharging energy to the network, which is known as vehicle-to-grid (V2G). The proposed model includes three distinct phases of operation. The first phase of the model consists in an economic dispatch based on an AC optimal power flow (AC-OPF); in the second phase Kirschen's and Bialek's tracing algorithms are used and compared to evaluate the impact of each resource in the network. Finally, the MW-mile method is used in the third phase of the proposed model. A distribution network of 33 buses with large penetration of DER is used to illustrate the application of the proposed model.
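The MW-mile step of the third phase can be illustrated with a minimal sketch: each player's charge is proportional to the flow it imposes on each line weighted by line length. The line lengths, per-player flows and total cost below are invented placeholders, not data from the paper:

```python
# Illustrative MW-mile cost allocation. All numbers are hypothetical.

line_length_miles = {"L1": 10.0, "L2": 5.0, "L3": 8.0}

# Flow (MW) each player imposes on each line, e.g. as attributed
# by a tracing algorithm such as Bialek's or Kirschen's.
player_line_flow = {
    "DG_1":   {"L1": 2.0, "L2": 0.5, "L3": 0.0},
    "V2G_1":  {"L1": 0.0, "L2": 1.0, "L3": 1.5},
    "Load_1": {"L1": 3.0, "L2": 2.5, "L3": 2.0},
}

total_network_cost = 1000.0  # network cost to be allocated (monetary units)

def mw_mile_allocation(flows, lengths, total_cost):
    """Allocate total_cost in proportion to each player's MW-mile usage."""
    usage = {
        player: sum(mw * lengths[line] for line, mw in line_flow.items())
        for player, line_flow in flows.items()
    }
    total_usage = sum(usage.values())
    return {p: total_cost * u / total_usage for p, u in usage.items()}

charges = mw_mile_allocation(player_line_flow, line_length_miles,
                             total_network_cost)
print(charges)  # per-player share of the network cost
```

By construction the charges sum to the total cost, and players whose flows travel over longer or more heavily used lines pay proportionally more.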



The future of health care delivery is becoming more citizen-centred, as today's user is more active, better informed and more demanding. The European Commission is promoting online health services and, therefore, member states will need to boost deployment and use of online services. This makes e-health adoption an important field to be studied and understood. This study applied the extended unified theory of acceptance and use of technology (UTAUT2) to explain patients' individual adoption of e-health. An online questionnaire was administered in Portugal, using mostly the same instrument used in UTAUT2, adapted to the e-health context. We collected 386 valid answers. Performance expectancy, effort expectancy, social influence, and habit had the most significant explanatory power over behavioural intention, while habit and behavioural intention had the most significant explanatory power over technology use. The model explained 52% of the variance in behavioural intention and 32% of the variance in technology use. Our research helps to understand the desired technology characteristics of e-health. By testing an information technology acceptance model, we are able to determine what patients value most when deciding whether or not to adopt e-health systems.


Environmental pollution by heavy metals such as chromium and by organic compounds such as phenols is a serious global problem because of their toxicity and their adverse effects on humans, flora and fauna, both through their accumulation in the food chain and through their continued persistence in the environment. In a preliminary study performed in our laboratory, high levels of these pollutants were detected in sediments and effluents from industrial zones in the south of Córdoba Province, which raises the need to remove them. Among the available technologies, bioremediation, which is based on the use of biological systems such as microorganisms to detoxify and degrade contaminants, is likely to be a more effective and less expensive alternative than conventional techniques. However, the application of this technology depends largely on the particular and specific characteristics of the zone to be remediated. Consequently, the sampling zone will first be characterised, and native microorganisms tolerant to chromium and phenol will be isolated and identified from soil, water and sediment samples, since they could constitute a suitable biotechnological tool, better adapted to the site to be treated. The bioremediation of Cr and phenol by these microorganisms will then be studied, analysing their capacity to biotransform, bioaccumulate or biosorb these contaminants, and the optimal conditions for the treatment will be determined. The possible physiological, biochemical and molecular mechanisms involved in remediation will be analysed, as this is a crucial step in the design of an adequate and efficient strategy. Finally, this technology will be applied at reactor scale, as a first approximation to larger-scale treatment.
In this way, the levels of these pollutants are expected to be reduced, thereby minimising their environmental impact on soils and aquifers. In the future, the use of the selected microorganisms, individually or in consortia, for the treatment of industrial effluents prior to their release into the environment, or their use in bioaugmentation, would constitute possible applications. The main scientific and technological impacts of the project will be: (a) the generation of a new biological technology for the decontamination of chromium and phenol, seeking to offer solutions to an environmental problem that affects our region but is also common to most countries; (b) the training of new human resources in the area; and (c) collaborative work with other research groups that stand out in the field of environmental biotechnology.


Untreated wastewater being discharged directly into rivers is a very harmful environmental hazard that needs to be tackled urgently in many countries. In order to safeguard the river ecosystem and reduce water pollution, it is important to have an effluent charge policy that promotes investment in wastewater treatment technology by domestic firms. This paper considers the strategic interaction between the government and the domestic firms regarding the investment in wastewater treatment technology and the design of the optimal effluent charge policy that should be implemented. In this model, the higher the proportion of non-investing firms, the higher the probability of having to incur an effluent charge, and the higher that charge. On the one hand, the government needs to impose a sufficiently strict policy to ensure that firms have a strong incentive to invest. On the other hand, the policy cannot be so strict that it drives out firms which cannot afford to invest in such expensive technology. The paper analyses the factors that affect the probability of investment in this technology. It also explains the difficulty of imposing a strict environmental policy in countries that have too many small firms which cannot afford to invest unless subsidised.


This paper analyses optimal income taxes over the business cycle under a balanced-budget restriction, for low, middle and high income households. A model incorporating capital-skill complementarity in production and differential access to capital and labour markets is developed to capture the cyclical characteristics of the US economy, as well as the empirical observations on wage (skill premium) and wealth inequality. We find that the tax rate for high income agents is optimally the least volatile and the tax rate for low income agents the least countercyclical. In contrast, the path of optimal taxes for the middle income group is found to be very volatile and countercyclical. We further find that the optimal response to output-enhancing capital equipment technology and spending cuts is to increase the progressivity of income taxes. Finally, in response to positive TFP shocks, taxation becomes more progressive after about two years.


In this analysis, we examine the relationship between an individual's decision to volunteer and the average level of volunteering in the community where the individual resides. Our theoretical model is based on a coordination game, in which volunteering by others is informative regarding the benefit from volunteering. We demonstrate that the interaction between this information and one's private information makes it more likely that he or she will volunteer, given a higher level of contributions by his or her peers. We complement this theoretical work with an empirical analysis using Census 2000 Summary File 3 and Current Population Survey (CPS) 2004-2007 September supplement file data. We control for various individual and community characteristics, and employ robustness checks to verify the results of the baseline analysis. We additionally use an innovative instrumental variables strategy to account for reflection bias and endogeneity caused by selective sorting by individuals into neighborhoods, which allows us to argue for a causal interpretation. The empirical results in the baseline, as well as all robustness analyses, verify the main result of our theoretical model, and we employ a more general structure to further strengthen our results.
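The instrumental variables logic used here can be illustrated with a minimal two-stage least squares (2SLS) sketch on synthetic data; the data-generating process below is invented for illustration and is not the Census/CPS data used in the study:

```python
# Illustrative 2SLS: an endogenous regressor x (think: community
# volunteering level) is instrumented by z. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                         # instrument
u = rng.normal(size=n)                         # unobserved confounder
x = 0.8 * z + 0.5 * u + rng.normal(size=n)     # endogenous regressor
y = 1.0 * x + 1.0 * u + rng.normal(size=n)     # true coefficient on x is 1.0

def ols(X, y):
    """OLS coefficients via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

beta_ols = ols(X, y)                  # biased upward by the confounder u

# Stage 1: project x on the instrument; Stage 2: regress y on fitted x.
x_hat = Z @ ols(Z, x)
beta_2sls = ols(np.column_stack([np.ones(n), x_hat]), y)

print("OLS slope:", beta_ols[1])      # noticeably above the true 1.0
print("2SLS slope:", beta_2sls[1])    # close to the true 1.0
```

Because z shifts x but is unrelated to the confounder u, the second-stage slope recovers the causal effect that plain OLS overstates, which is the same role the authors' instruments play against reflection bias and selective sorting.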



Excessive exposure to solar ultraviolet (UV) radiation is the main cause of skin cancer. Specific prevention should be further developed to target overexposed or highly vulnerable populations; a better characterisation of anatomical UV exposure patterns is, however, needed for specific prevention. The aim was to develop a regression model for predicting the UV exposure ratio (ER, the ratio between the anatomical dose and the corresponding ground-level dose) for each body site without requiring individual measurements. A 3D numeric model (SimUVEx) was used to compute ER for various body sites and postures. A multiple fractional polynomial regression analysis was performed to identify predictors of ER. The regression model used simulation data, and its performance was tested on an independent data set. Two input variables were sufficient to explain ER: the cosine of the maximal daily solar zenith angle and the fraction of the sky visible from the body site. The regression model was in good agreement with the simulated ER (R(2)=0.988). Relative errors of up to +20% and -10% were found in daily dose predictions, whereas an average relative error of only 2.4% (-0.03% to 5.4%) was found in yearly dose predictions. The regression model accurately predicts ER and UV doses on the basis of readily available data, such as global UV erythemal irradiance measured at ground surface stations or inferred from satellite information. It makes the development of exposure data on a wide temporal and geographical scale possible and opens broad perspectives for epidemiological studies and skin cancer prevention.
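As a purely illustrative sketch of fitting ER to the two predictors named above: the paper uses a multiple fractional polynomial model fitted to SimUVEx output, whereas here a simple linear fit on synthetic data stands in, with invented coefficients and noise:

```python
# Illustrative regression of an exposure ratio (ER) on the two predictors
# named in the abstract. Synthetic data and functional form are placeholders.
import numpy as np

rng = np.random.default_rng(1)
n = 500
cos_sza = rng.uniform(0.2, 1.0, n)   # cos of maximal daily solar zenith angle
sky_view = rng.uniform(0.1, 1.0, n)  # fraction of sky visible from body site

# Hypothetical ground truth: ER rises with both predictors, plus noise.
er = 0.2 + 0.5 * cos_sza + 0.4 * sky_view + rng.normal(0, 0.02, n)

# Least-squares fit of ER on an intercept and the two predictors.
X = np.column_stack([np.ones(n), cos_sza, sky_view])
coef, *_ = np.linalg.lstsq(X, er, rcond=None)

# Goodness of fit (R^2) of the fitted model.
er_hat = X @ coef
ss_res = np.sum((er - er_hat) ** 2)
ss_tot = np.sum((er - np.mean(er)) ** 2)
r2 = 1.0 - ss_res / ss_tot

print("coefficients:", coef)
print("R^2:", r2)
```

The point of the sketch is only the shape of the pipeline: once the two predictors are available from ground stations or satellite data, a fitted model of this form yields ER, and hence anatomical doses, without individual dosimetry.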


Cognitive impairment has emerged as a major driver of disability in old age, with profound effects on individual well-being and decision making at older ages. In the light of policies aimed at postponing retirement ages, an important question is whether continued labour supply helps to maintain high levels of cognition at older ages. We use data of older men from the US Health and Retirement Study to estimate the effect of continued labour market participation at older ages on later-life cognition. As retirement itself is likely to depend on cognitive functioning and may thus be endogenous, we use offers of early retirement windows as instruments for retirement in econometric models for later-life cognitive functioning. These offers of early retirement are legally required to be nondiscriminatory and thus, inter alia, unrelated to cognitive functioning. At the same time, these offers of early retirement options are significant predictors of retirement. Although the simple ordinary least squares estimates show a negative relationship between retirement duration and various measures of cognitive functioning, instrumental variable estimates suggest that these associations may not be causal effects. Specifically, we find no clear relationship between retirement duration and later-life cognition for white-collar workers and, if anything, a positive relationship for blue-collar workers. Copyright © 2011 John Wiley & Sons, Ltd.


The detailed geological mapping and structural study of a complete transect across the northwestern Himalaya allow us to describe the tectonic evolution of the north Indian continental margin during the Tethys ocean opening and the Himalayan Orogeny. The Late Paleozoic Tethys rifting is associated with several tectonomagmatic events. In Upper Lahul and SE Zanskar, this extensional phase is recorded by Lower Carboniferous synsedimentary transtensional faults, a Lower Permian stratigraphic unconformity, a Lower Permian granitic intrusion and middle Permian basaltic extrusions (Panjal Traps). In eastern Ladakh, a Permian listric normal fault is also related to this phase. The scarcity of synsedimentary faults and the gradual increase of the Permian syn-rift sediment thickness towards the NE suggest a flexural type margin. The collision of India and Asia is characterized by a succession of contrasting orogenic phases. South of the Suture Zone, the initiation of the SW vergent Nyimaling-Tsarap Nappe corresponds to an early phase of continental underthrusting. To the S, in Lahul, an opposite underthrusting within the Indian plate is recorded by the NE vergent Tandi Syncline. This structure is associated with the newly defined Shikar Beh Nappe, now partly eroded, which is responsible for the high grade (amphibolite facies) regional metamorphism of South Lahul. The main thrusting of the Nyimaling-Tsarap Nappe followed the formation of the Shikar Beh Nappe. The Nyimaling-Tsarap Nappe developed by ductile shear of the upper part of the subducted Indian continental margin and is responsible for the progressive regional metamorphism of SE Zanskar, reaching amphibolite facies below the frontal part of the nappe, near Sarchu. In Upper Lahul, the frontal parts of the Nyimaling-Tsarap and Shikar Beh nappes are separated by a zone of low grade metamorphic rocks (pumpellyite-actinolite facies to lower greenschist facies).
At high structural level, the Nyimaling-Tsarap Nappe is characterized by imbricate structures, which grade into a large ductile shear zone with depth. The related crustal shortening is about 87 km. The root zone and the frontal part of this nappe have been subsequently affected by two zones of dextral transpression and underthrusting: the Nyimaling Shear Zone and the Sarchu Shear Zone. These shear zones are interpreted as consequences of the counterclockwise rotation of the continental underthrusting direction of India relative to Asia, which occurred some 45 and 36 Ma ago, according to plate tectonic models. Later, a phase of NE vergent "backfolding" developed on these two zones of dextral transpression, creating isoclinal folds in SE Zanskar and more open folds in the Nyimaling Dome and in the Indus Molasse sediments. During a late stage of the Himalayan Orogeny, the frontal part of the Nyimaling-Tsarap Nappe underwent an extension of about 15 km. This phase is represented by two types of structures, responsible for the tectonic unroofing of the amphibolite facies rocks of the Sarchu area: the Sarchu high angle Normal Fault, cutting a first set of low angle normal faults, which were created by reactivation of older thrust planes related to the Nyimaling-Tsarap Nappe.