30 results for Value System
Abstract:
This paper reviews the impact of the global financial crisis on financial system reform in China. Scholars and practitioners have critically questioned the efficiencies of the Anglo-American principal-agent model of corporate governance, which promotes shareholder-value maximisation. Should China continue to follow the U.K.-U.S. path in relation to financial reform? This conceptual paper provides an insightful review of the corporate governance literature, regulatory reports and news articles from the financial press. After examining the fundamental limitations of the laissez-faire philosophy that underpins the neo-liberal model of capitalism, the paper considers the risks in opening up China’s financial markets and relaxing monetary and fiscal policies. The paper outlines a critique of shareholder capitalism in relation to the German team-production model of corporate governance, which promotes a “social market economy” style of capitalism. Through such analysis, the paper explores numerous implications for China to consider in terms of developing a new and sustainable corporate governance model. China needs to pursue its own path of financial reform, grounded in an understanding of its particular economy. The global financial crisis might help China rethink the nature of corporate governance, identify its weaknesses and assess the current reform agenda.
Abstract:
Purpose – The purpose of this paper is to analyse the likelihood of adoption of a recently designed Welfare Assessment System in agri-food supply chains and the factors affecting the adoption decision. The application is carried out for pig and poultry chains. Design/methodology/approach – This research consisted of two main components: interviews with retailers in pig and poultry supply chains in eight different EU countries to explore their perceptions of the adoption possibilities of the welfare assessment system; and a conjoint analysis designed to evaluate the perceived likelihood of adoption of the assessment system by different Standards Formulating Organisations (SFOs). Findings – Stakeholders were found to be especially concerned about the costs of implementing the system and how it could, or should, be merged with existing assurance schemes. Another conclusion of the study is that the presence of a strong independent third party supporting the implementation of the welfare assessment system would be the most important influence on the decision whether or not to adopt it. Originality/value – This research evaluates the adoption possibilities of a novel Welfare Assessment System and presents the views of different supply chain stakeholders on the adoption of such a system. The main factors affecting the adoption decision are identified and analysed. Contrary to expectations, the costs of adopting the new welfare assessment system were not considered to be the most important factor affecting supply chain stakeholders’ decision about adoption.
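Conjoint analyses of this kind are commonly estimated by regressing stated ratings on dummy-coded attribute levels. Below is a minimal sketch of that estimation step in Python; the attributes, profiles and ratings are wholly invented for illustration and do not reflect the study's design.

import numpy as np

# Hypothetical profiles: dummy-coded attribute levels, e.g.
# [high implementation cost, strong third-party support, merged with existing scheme]
X = np.array([
    [1, 0, 0],
    [1, 1, 0],
    [0, 1, 1],
    [0, 0, 1],
    [1, 1, 1],
    [0, 0, 0],
])
# Hypothetical adoption-likelihood ratings (1-10) for the six profiles.
y = np.array([3, 7, 9, 6, 8, 4])

# Estimate part-worth utilities by ordinary least squares (with intercept).
A = np.column_stack([np.ones(len(y)), X])
partworths, *_ = np.linalg.lstsq(A, y, rcond=None)
print(dict(zip(["baseline", "high_cost", "third_party", "merged"],
               partworths.round(2))))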
Abstract:
We consider the two-point boundary value problem for stiff systems of ordinary differential equations. For systems that can be transformed to essentially diagonally dominant form with appropriate smoothness conditions, a priori estimates are obtained. Problems with turning points can be treated with this theory, and we discuss this in detail. We give robust difference approximations and present error estimates for these schemes. In particular we give a detailed description of how to transform a general system to essentially diagonally dominant form and then stretch the independent variable so that the system will satisfy the correct smoothness conditions. Numerical examples are presented for both linear and nonlinear problems.
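To illustrate the kind of difference approximation discussed, here is a minimal sketch (not the authors' scheme) that solves a model stiff two-point boundary value problem, eps*y'' + y' = 1 with y(0) = y(1) = 0, using an upwinded finite-difference method whose tridiagonal matrix is essentially diagonally dominant for any mesh width:

import numpy as np

eps, n = 1e-3, 200
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Interior equations: eps*(y[i-1] - 2y[i] + y[i+1])/h^2 + (y[i+1] - y[i])/h = 1.
# The forward (upwind) difference for y' keeps the scheme stable as eps -> 0.
A = np.zeros((n - 1, n - 1))
b = np.ones(n - 1)
for i in range(n - 1):
    A[i, i] = -2 * eps / h**2 - 1 / h
    if i > 0:
        A[i, i - 1] = eps / h**2
    if i < n - 2:
        A[i, i + 1] = eps / h**2 + 1 / h
y = np.zeros(n + 1)
y[1:n] = np.linalg.solve(A, b)  # boundary values remain zero
# The solution has a boundary layer near x = 0, resolved without oscillation.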
Abstract:
Due to the requirement to demonstrate the financial feasibility of policy proposals and scheme-specific planning obligations, development viability and development appraisal have become core themes in the English planning system. The objective of this paper is to evaluate the application of development appraisal in practice. The paper reviews the literature and the models available to assess the viability of development, and analyses a sample of 19 development viability appraisals to identify practice. The paper concludes that the practice of development appraisal deviates significantly from the tenets of capital budgeting theory. In particular, in addition to a propensity to oversimplify the timing of income and expenditure, the way in which debt, developer’s return, and value and cost change are handled in practice illustrates a major gap between mainstream capital budgeting theory and development appraisal in practice.
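For readers unfamiliar with the capital budgeting tenets the paper takes as its reference point, the sketch below shows the textbook treatment with wholly invented figures: income and expenditure are assigned to explicit periods and discounted to a net present value, the step that the paper finds practice tends to oversimplify.

# Hypothetical quarterly development cash flows: land and build costs first,
# sales receipts later (all figures invented for illustration).
quarterly_cash_flows = [-2_000_000, -1_500_000, -1_500_000, -500_000,
                        0, 1_800_000, 2_600_000, 3_400_000]
quarterly_rate = 0.02  # assumed discount rate per quarter

npv = sum(cf / (1 + quarterly_rate) ** t
          for t, cf in enumerate(quarterly_cash_flows, start=1))
print(f"NPV: {npv:,.0f}")  # positive at this rate -> scheme viable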
Abstract:
This paper investigates the application and use of development viability models in the formation of planning policies in the UK. Particular attention is paid to three key areas: the assumed development scheme in development viability models, the use of forecasts, and the debate concerning Threshold Land Value. The empirical section reports the results of an interview survey involving the main producers of development viability models and appraisals. It is concluded that, although development viability models have intrinsic limitations associated with model composition and input uncertainties, the most significant limitations are related to the ways in which they have been adapted for use in the planning system. In addition, it is suggested that the contested nature of Threshold Land Value is an example of calculative practices providing a façade of technocratic rationality in the planning system.
Abstract:
Although ensemble prediction systems (EPS) are increasingly promoted as the scientific state of the art for operational flood forecasting, the communication, perception, and use of the resulting alerts have received much less attention. Using a variety of qualitative research methods, including direct user feedback at training workshops, participant observation during site visits to 25 forecasting centres across Europe, and in-depth interviews with 69 forecasters, civil protection officials, and policy makers involved in operational flood risk management in 17 European countries, this article discusses the perception, communication, and use of European Flood Alert System (EFAS) alerts in operational flood management. In particular, the article describes how the design of EFAS alerts has evolved in response to user feedback and desires for a hydrograph-like way of visualizing EFAS outputs. It also documents a variety of forecaster perceptions about the value and skill of EFAS forecasts and the best way of using them to inform operational decision making. EFAS flood alerts were generally welcomed by flood forecasters as a sort of ‘pre-alert’ to spur greater internal vigilance. In most cases, however, they did not lead, by themselves, to further preparatory action or to earlier warnings to the public or emergency services. Forecasters’ hesitancy to act in response to medium-term, probabilistic alerts highlights some wider institutional obstacles to the research community’s hope that EPS will be readily embraced by operational forecasters and lead to immediate improvements in flood incident management. The EFAS experience offers lessons for other hydrological services seeking to implement EPS operationally for flood forecasting and warning.
Abstract:
The Normal Quantile Transform (NQT) has been used in many hydrological and meteorological applications in order to make the Cumulative Distribution Function (CDF) of observed, simulated and forecast river discharge, water level or precipitation data Gaussian. It is also at the heart of the meta-Gaussian model for assessing the total predictive uncertainty of the Hydrological Uncertainty Processor (HUP) developed by Krzysztofowicz. In the field of geostatistics this transformation is better known as the Normal-Score Transform. This paper discusses some possible problems caused by small sample sizes when applying the NQT in flood forecasting systems and outlines a novel way to solve them by combining extreme value analysis and non-parametric regression methods. The method is illustrated by examples of hydrological stream-flow forecasts.
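A minimal sketch of the transform itself, using invented data: each observation is mapped to the standard-normal quantile of its empirical plotting position. Small samples leave the tails of this mapping poorly constrained, which is the problem the paper addresses.

import numpy as np
from scipy.stats import norm, rankdata

flows = np.array([12.0, 45.0, 7.5, 88.0, 23.0, 160.0, 31.0])  # m^3/s, invented

ranks = rankdata(flows)               # ranks 1..n, ties averaged
p = ranks / (len(flows) + 1)          # empirical non-exceedance probabilities
z = norm.ppf(p)                       # NQT: the Gaussian image of the sample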
Abstract:
We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a “random” model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). SDBM reproduces observed CO2 seasonal cycles, but its simulated net primary production (NPP) is too high compared with independent measurements. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
Abstract:
Purpose – The aim of this paper is to present a conceptual valuation framework that allows telecare service stakeholders to assess telecare devices in the home in terms of their social, psychological and practical effects. The framework enables telecare service operators to engage more effectively with the social and psychological issues resulting from telecare technology deployment in the home, and to design and develop appropriate responses as a result. Design/methodology/approach – The paper provides contextual background on the need for sociologically pitched tools that engage with the social and cultural feelings of telecare service users, before presenting the valuation framework and how it could be used. Findings – A conceptual valuation framework is presented for potential development and use. Research limitations/implications – The valuation framework has yet to be extensively tested or verified. Practical implications – The valuation framework needs to be tested and deployed by a telecare service operator, but the core messages of the paper are valid and of interest to readers. Social implications – In addressing the social and cultural perspectives of telecare service stakeholders, the paper makes a link between the technologies in the home, the feelings and orientations of service users (e.g. residents, emergency services, wardens) and the telecare service operator. Originality/value – The paper is an original contribution to the field as it details how the sociological orientations of telecare technology service users should be valued and addressed by service operators. It has value through the conceptual arguments made and through the valuation framework presented.
Abstract:
We present a benchmark system for global vegetation models. This system provides a quantitative evaluation of multiple simulated vegetation properties, including primary production; seasonal net ecosystem production; vegetation cover, composition and height; fire regime; and runoff. The benchmarks are derived from remotely sensed gridded datasets and site-based observations. The datasets allow comparisons of annual average conditions and seasonal and inter-annual variability, and they allow the impact of spatial and temporal biases in means and variability to be assessed separately. Specifically designed metrics quantify model performance for each process, and are compared to scores based on the temporal or spatial mean value of the observations and a "random" model produced by bootstrap resampling of the observations. The benchmark system is applied to three models: a simple light-use efficiency and water-balance model (the Simple Diagnostic Biosphere Model: SDBM), and the Lund-Potsdam-Jena (LPJ) and Land Processes and eXchanges (LPX) dynamic global vegetation models (DGVMs). In general, the SDBM performs better than either of the DGVMs. It reproduces independent measurements of net primary production (NPP) but underestimates the amplitude of the observed CO2 seasonal cycle. The two DGVMs show little difference for most benchmarks (including the inter-annual variability in the growth rate and seasonal cycle of atmospheric CO2), but LPX represents burnt fraction demonstrably more accurately. Benchmarking also identified several weaknesses common to both DGVMs. The benchmarking system provides a quantitative approach for evaluating how adequately processes are represented in a model, identifying errors and biases, tracking improvements in performance through model development, and discriminating among models. Adoption of such a system would do much to improve confidence in terrestrial model predictions of climate change impacts and feedbacks.
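To make the scoring idea concrete, here is a minimal sketch with invented data: a model is scored against observations with a normalised mean error and compared with "random" models built by bootstrap resampling of the observations, as described above. The specific metric is an illustrative assumption rather than the paper's exact definition.

import numpy as np

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 300.0, size=500)           # stand-in for observed NPP
model = obs * 1.15 + rng.normal(0, 60, 500)     # stand-in for simulated NPP

def nme(sim, ref):
    # Normalised mean error: 0 is perfect; 1 matches a mean-only prediction.
    return np.mean(np.abs(sim - ref)) / np.mean(np.abs(ref - ref.mean()))

model_score = nme(model, obs)
random_scores = [nme(rng.choice(obs, obs.size, replace=True), obs)
                 for _ in range(1000)]
print(model_score, np.mean(random_scores))      # the model should beat random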
Abstract:
Simple predator–prey models with a prey-dependent functional response predict that enrichment (increased carrying capacity) destabilizes community dynamics: this is the ‘paradox of enrichment’. However, the energy value of prey is very important in this context. The intraspecific chemical composition of a prey species determines its energy value as food for the potential predator. Theoretical and experimental studies establish that variable chemical composition of prey affects predator–prey dynamics. Recently, experimental and theoretical attempts have been made to incorporate explicitly the stoichiometric heterogeneity of simple predator–prey systems. Following the results of these experimental and theoretical advances, in this article we propose a simple phenomenological formulation of the variation of energy value at increased levels of carrying capacity. Results of our study demonstrate that coupling the parameters representing the phenomenological energy value and the carrying capacity in a realistic way may avoid destabilization of community dynamics following enrichment. Additionally, under such coupling the producer–grazer system persists only for an intermediate zone of production, a result consistent with recent studies. We suggest that, while addressing the issue of enrichment in a general predator–prey model, the phenomenological relationship we propose here might be applied to avoid Rosenzweig’s paradox.
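For context, the classical Rosenzweig–MacArthur model to which such a coupling could apply can be written as follows; the declining conversion-efficiency function e(K) shown here is a hypothetical illustration of the coupling, not the authors' formulation:

\[
\frac{dx}{dt} = rx\left(1 - \frac{x}{K}\right) - \frac{axy}{1 + ahx}, \qquad
\frac{dy}{dt} = e(K)\,\frac{axy}{1 + ahx} - \delta y,
\]

where x is prey density, y predator density, r the prey growth rate, K the carrying capacity, a the attack rate, h the handling time and \(\delta\) the predator mortality. With a constant conversion efficiency e, increasing K destabilizes the coexistence equilibrium (the paradox of enrichment); letting the energy value fall with enrichment, for example \(e(K) = e_0 K_0/(K_0 + K)\), couples the two parameters in the spirit of the paper.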
Abstract:
Over the last decade the English planning system has placed greater emphasis on the financial viability of development. ‘Calculative’ practices have been used to quantify and capture land value uplifts. Development viability appraisal (DVA) has become a key part of the evidence base used in planning decision-making and informs both ‘site-specific’ negotiations about the level of land value capture for individual schemes and ‘area-wide’ planning policy formation. This paper investigates how implementation of DVA is governed in planning policy formation. It is argued that the increased use of DVA raises important questions about how planning decisions are made and operationalised, not least because DVA is often poorly understood by some key stakeholders. The paper uses the concept of governance to thematically analyse semi-structured interviews conducted with the producers of DVAs and considers key procedural issues including (in)consistencies in appraisal practices, levels of stakeholder consultation and the potential for client and producer bias. Whilst stakeholder consultation is shown to be integral to the appraisal process in order to improve the quality of the appraisals and to legitimise the outputs, participation is restricted to industry experts and excludes some interest groups, including local communities. It is concluded that, largely because of its recent adoption and knowledge asymmetries between local planning authorities and appraisers, DVA is a weakly governed process characterised by emerging and contested guidance and is therefore ‘up for grabs’.
Abstract:
This paper investigates the value of a generic storage system within two GB market mechanisms and one ancillary service: the wholesale power market, the Balancing Mechanism and Firm Frequency Response (FFR). Three models are evaluated under perfect foresight and a fixed horizon, which is subsequently extended to explore the impact of longer foresight on market revenues. The results show that, comparatively, the Balancing Mechanism represents the highest source of potential revenue, followed by the wholesale power market and FFR respectively. Longer horizons show diminishing returns, with the one-day horizon providing the vast majority of total revenues. However, storage power capacity utilization benefits from such long horizons. These results could imply that short horizons are very effective in capturing revenues in both the wholesale market and the Balancing Mechanism, whereas the sizing of a storage system should take horizon foresight and accuracy into consideration for greater benefit.
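As an illustration of the perfect-foresight evaluation for the wholesale market alone (invented prices and storage parameters; the Balancing Mechanism and FFR models would add further revenue terms), arbitrage over a fixed horizon can be cast as a small linear program:

import numpy as np
from scipy.optimize import linprog

prices = np.array([30, 25, 20, 40, 80, 95, 60, 35.0])  # GBP/MWh, hourly
T, power, energy, eff = len(prices), 10.0, 40.0, 0.9    # MW, MWh, charge eff.

# Decision variables: charge c_t then discharge d_t, each in [0, power].
# Maximise sum_t p_t*(d_t - c_t), i.e. minimise the negated revenue.
c_obj = np.concatenate([prices, -prices])
# State of charge after hour t is cumsum(eff*c - d); keep it in [0, energy].
L = np.tril(np.ones((T, T)))
A_ub = np.vstack([np.hstack([ L * eff, -L]),   # soc <= energy
                  np.hstack([-L * eff,  L])])  # -soc <= 0
b_ub = np.concatenate([np.full(T, energy), np.zeros(T)])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub, bounds=[(0, power)] * (2 * T))
print(-res.fun)  # optimal wholesale revenue under perfect foresight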
Abstract:
There is growing research interest in how service ecosystems form and interact. This research therefore aims to explore service ecosystem formation and interaction, as well as the underlying nature of value co-creation. The work develops an initial conceptual framework for assessing service system interactions that includes the various stages of value co-creation within an ecosystem context. Plans for the further development of the conceptual framework are also presented.
Abstract:
The predictability of high-impact weather events on multiple time scales is a crucial issue in both scientific and socio-economic terms. In this study, a statistical-dynamical downscaling (SDD) approach is applied to an ensemble of decadal hindcasts obtained with the Max-Planck-Institute Earth System Model (MPI-ESM) to estimate the decadal predictability of peak wind speeds (as a proxy for gusts) over Europe. Yearly initialized decadal ensemble simulations with ten members are investigated for the period 1979–2005. The SDD approach is trained on COSMO-CLM regional climate model simulations and ERA-Interim reanalysis data and applied to the MPI-ESM hindcasts. The simulations for the period 1990–1993, which was characterized by several windstorm clusters, are analyzed in detail. The anomalies of the 95 % peak wind quantile of the MPI-ESM hindcasts are in line with the positive anomalies in reanalysis data for this period. To evaluate both the skill of the decadal predictability system and the added value of the downscaling approach, quantile verification skill scores are calculated for both the MPI-ESM large-scale wind speeds and the SDD-simulated regional peak winds. Skill scores are predominantly positive for the decadal predictability system, with the highest values for short lead times and for (peak) wind speeds equal to or above the 75 % quantile. This provides evidence that the analyzed hindcasts and the downscaling technique are suitable for estimating wind and peak wind speeds over Central Europe on decadal time scales. The skill scores for SDD-simulated peak winds are slightly lower than those for large-scale wind speeds. This behavior can be largely attributed to the fact that peak winds are a proxy for gusts and thus have higher variability than wind speeds. The introduced cost-efficient downscaling technique has the advantage of estimating not only wind speeds but also peak winds (a proxy for gusts), and can easily be applied to large ensemble datasets such as operational decadal prediction systems.
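A minimal sketch, with invented data, of a quantile verification skill score of the kind referred to above: it is built on the pinball (quantile) loss and measures a forecast's improvement over a climatological reference. The exact score used in the study may differ in detail.

import numpy as np

def quantile_score(forecast_q, obs, tau):
    # Pinball loss for forecasts of the tau-quantile (lower is better).
    u = obs - forecast_q
    return np.mean(np.maximum(tau * u, (tau - 1) * u))

rng = np.random.default_rng(1)
state = rng.normal(0.0, 4.0, 1000)             # hypothetical large-scale predictor
obs = state + rng.weibull(2.0, 1000) * 10      # stand-in for observed peak winds
ref = np.full(1000, np.quantile(obs, 0.75))    # climatological 75 % quantile
fc = state + np.quantile(obs - state, 0.75)    # forecast informed by the predictor

skill = 1 - quantile_score(fc, obs, 0.75) / quantile_score(ref, obs, 0.75)
print(skill)  # > 0: the forecast beats the climatological reference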