987 results for Tax optimality index
Abstract:
The Agri-Environmental Footprint project is developing a common methodology for assessing the environmental impact of European agri-environment schemes. The Agri-Environmental Footprint Index (AFI) has been constructed as a customisable, farm-level index that aggregates measurements of agri-environmental indicators. Farm-level impact scores can be aggregated at a regional level to track temporal change and/or to compare the success (or otherwise) of an agri-environment scheme.
Abstract:
An aggregated farm-level index, the Agri-environmental Footprint Index (AFI), based on multiple-criteria methods and representing a harmonised approach to the evaluation of EU agri-environmental schemes, is described. The index uses a common framework for the design and evaluation of policy that can be customised to locally relevant agri-environmental issues and circumstances. Evaluation can be strictly policy-focused, or broader and more holistic, in that context-relevant assessment criteria that are not necessarily considered in the evaluated policy can nevertheless be incorporated. The Index structure is flexible and can respond to diverse local needs. The process of Index construction is interactive, engaging farmers and other relevant stakeholders in a transparent decision-making process that can ensure acceptance of the outcome, help to forge an improved understanding of local agri-environmental priorities and potentially increase awareness of the critical role of farmers in environmental management. The structure of the AFI facilitates post-evaluation analysis of relative performance in different dimensions of the agri-environment, permitting identification of current strengths and weaknesses and enabling future improvement in policy design. Quantification of the environmental impact of agriculture beyond the stated aims of policy, using an 'unweighted' form of the AFI, has potential as the basis of an ongoing system of environmental audit within a specified agricultural context.
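As a rough illustration of the aggregation the two AFI abstracts describe (weighted indicator scores combined into a farm-level index, then aggregated regionally), here is a minimal sketch; the indicator names, scales and weights are invented for illustration, not taken from the AFI methodology.

```python
from statistics import mean

# Hypothetical stakeholder-derived weights over agri-environmental indicators;
# names and the 0-10 score scale are assumptions, not the AFI's actual scheme.
weights = {"biodiversity": 0.40, "soil": 0.35, "water": 0.25}

def farm_afi(scores, weights):
    """Weighted aggregation of indicator scores into one farm-level index."""
    return sum(weights[k] * scores[k] for k in weights)

farms = [
    {"biodiversity": 6.0, "soil": 7.5, "water": 5.0},
    {"biodiversity": 4.5, "soil": 6.0, "water": 8.0},
]

farm_scores = [farm_afi(f, weights) for f in farms]
regional_afi = mean(farm_scores)  # regional aggregation for temporal/scheme comparison
print(farm_scores, regional_afi)
```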
Abstract:
The paper considers meta-analysis of diagnostic studies that use a continuous score for classification of study participants into healthy or diseased groups. Classification is often done on the basis of a threshold or cut-off value, which might vary between studies. Consequently, conventional meta-analysis methodology focusing solely on separate analyses of sensitivity and specificity might be confounded by a potentially unknown variation of the cut-off value. To cope with this phenomenon, it is suggested to use instead an overall estimate of the misclassification error, previously suggested and known as Youden's index; furthermore, it is argued that this index is less prone to between-study variation of cut-off values. A simple Mantel-Haenszel estimator is suggested as a summary measure of the overall misclassification error, which adjusts for a potential study effect. The measure of the misclassification error based on Youden's index is advantageous in that it easily allows an extension to a likelihood approach, which can then cope with unobserved heterogeneity via a nonparametric mixture model. All methods are illustrated with an example of a diagnostic meta-analysis on duplex Doppler ultrasound, with angiography as the standard, for stroke prevention.
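Youden's index itself is simply J = sensitivity + specificity − 1 per study. The sketch below computes it from 2×2 tables and forms a size-weighted summary as a stand-in for the paper's Mantel-Haenszel estimator, whose exact weighting is not reproduced here; the study counts are made up.

```python
# Youden's index J = sensitivity + specificity - 1 for each study, then a
# simple size-weighted summary. The weighting is illustrative only, not the
# exact Mantel-Haenszel estimator proposed in the paper.

def youden(tp, fn, tn, fp):
    """Youden's index from a study's 2x2 diagnostic table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

# (tp, fn, tn, fp) per study -- invented numbers for illustration
studies = [(45, 5, 80, 20), (30, 10, 60, 10), (50, 15, 90, 25)]

js = [youden(*s) for s in studies]
weights = [sum(s) for s in studies]  # study size as a crude weight
summary_j = sum(w * j for w, j in zip(weights, js)) / sum(weights)
print(js, summary_j)
```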
Abstract:
Individuals are typically co-infected by a diverse community of microparasites (e.g. viruses or protozoa) and macroparasites (e.g. helminths). Vertebrates respond to these parasites differently, typically mounting T helper type 1 (Th1) responses against microparasites and Th2 responses against macroparasites. These two responses may be antagonistic, such that hosts face a 'decision' about how to allocate potentially limiting resources. Such decisions at the individual host level will influence parasite abundance at the population level which, in turn, will feed back upon the individual level. We take a first step towards a complete theoretical treatment by placing an analysis of optimal immune responses under microparasite-macroparasite co-infection within an epidemiological framework. We show that the optimal immune allocation is quantitatively sensitive to the shape of the trade-off curve and qualitatively sensitive to life-history traits of the host, microparasite and macroparasite. Ultimately, however, a more complete framework is needed to bring together the optimal strategy at the individual level and the population-level consequences of those responses before we can truly understand the evolution of host immune responses under parasite co-infection.
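The core claim, that the optimal allocation depends on the shape of the trade-off curve, can be illustrated with a toy calculation. This is a minimal sketch with invented cost functions and parameters, not the paper's epidemiological model; `host_cost`, the shape parameter `s` and the infection-pressure terms are all assumptions.

```python
import numpy as np

def host_cost(a, s, micro_pressure, macro_pressure):
    """Toy cost to the host when a fraction a of immune effort goes to Th1.

    The exponent s sets the curvature of the Th1/Th2 trade-off; the two
    pressure terms stand in for microparasite and macroparasite burdens.
    """
    th1 = a ** s            # effectiveness of the Th1 response
    th2 = (1 - a) ** s      # effectiveness of the Th2 response
    return micro_pressure / (1 + th1) + macro_pressure / (1 + th2)

alloc = np.linspace(0, 1, 1001)
for s in (0.5, 1.0, 2.0):   # concave, linear and convex trade-off curves
    best = alloc[np.argmin(host_cost(alloc, s, 2.0, 1.0))]
    print(f"trade-off shape s={s}: optimal Th1 allocation = {best:.2f}")
```

Even in this caricature, the optimum shifts as the curvature changes, which is the qualitative behaviour the abstract reports.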
Abstract:
We have developed a model that allows players in the building and construction sector, as well as energy policy makers working on energy strategies, to gauge the interest of investors in the Kingdom of Bahrain in conducting Building Integrated Photovoltaic (BIPV) or Building Integrated Wind Turbine (BIWT) projects, i.e. partially sustainable or green buildings. The model allows the calculation of the Sustainable Building Index (SBI), which ranges from 0.1 (lowest) to 1.0 (highest); the higher the figure, the greater the chance of launching BIPV or BIWT projects. The model was tested in Bahrain and the calculated SBI was found to be 0.47. This means that an extensive effort must be made through policies on renewable energy, renewable energy education, incentives for BIPV and BIWT projects, environmental awareness, and promotion of clean and sustainable energy for building and construction projects. Our model can be used internationally to create a "Global SBI" database. The Sustainable Building and Construction Initiative (SBCI) of the United Nations could take on the task of establishing such a database using this model.
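The abstract does not state how the SBI is computed, so the following is only a plausible sketch of how such an index might aggregate sub-scores into a value between 0 and 1; all factor names, scores and weights are hypothetical.

```python
# Hypothetical sub-scores in [0, 1] for the kinds of factors the abstract
# mentions (policy, education, incentives, awareness). The weighted-average
# aggregation is an assumption, not the paper's actual SBI formula.

factors = {
    "re_policy": 0.4,       # maturity of renewable-energy policy
    "re_education": 0.5,    # renewable-energy education
    "incentives": 0.3,      # incentives for BIPV/BIWT projects
    "env_awareness": 0.6,   # environmental awareness
}
weights = {"re_policy": 0.3, "re_education": 0.2,
           "incentives": 0.3, "env_awareness": 0.2}

sbi = sum(weights[k] * factors[k] for k in factors)
print(f"SBI = {sbi:.2f}")   # prints 0.43 here; mid-range values like Bahrain's
                            # 0.47 would signal weak investor interest
```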
Abstract:
This paper deals with the energy consumption and performance evaluation of air supply systems for a ventilated room involving high- and low-level supplies. The energy performance assessment is based on the airflow rate, which is related to the fan power consumption, while achieving the same environmental quality in each case. Four different ventilation systems are considered: wall displacement ventilation, confluent jets ventilation, impinging jet ventilation and a high-level mixing ventilation system. The ventilation performance of these systems is examined by requiring each to achieve the same Air Distribution Index (ADI), which is used to evaluate the indoor environment produced in the room by the ventilation strategy in use. The widely used high-level supplies require much more fan power than low-level supplies to achieve the same value of ADI. In addition, the supply velocity, and hence the supply dynamic pressure, for a high-level supply is much larger than for low-level supplies, which further increases the power consumption of high-level supply systems. The paper considers these factors and attempts to provide some guidelines on the difference in energy consumption between high- and low-level air supply systems. This will be useful information for designers; to the authors' knowledge, little information is available in the literature on this aspect of room air distribution. The results reveal that mixing ventilation requires the highest fan power and confluent jets ventilation the lowest to achieve nearly the same value of ADI.
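The link between airflow rate and fan power that this comparison relies on can be made concrete with the standard fan-power relation P = Q·Δp/η. The numbers below are illustrative, not taken from the paper.

```python
# Fan power from airflow rate and pressure rise: P = Q * dp / eta (standard
# relation). Flow rates, pressures and efficiency are illustrative values.

def fan_power(q_m3_s: float, dp_pa: float, eta: float = 0.6) -> float:
    """Electrical fan power (W) for airflow q (m^3/s) against pressure rise dp (Pa)."""
    return q_m3_s * dp_pa / eta

# A high-level (mixing) supply typically needs both a larger airflow rate and
# a larger dynamic pressure than a low-level (displacement) supply to reach
# the same ADI, so both factors compound in the power draw.
p_high = fan_power(q_m3_s=0.20, dp_pa=60.0)
p_low = fan_power(q_m3_s=0.12, dp_pa=25.0)
print(f"high-level: {p_high:.1f} W, low-level: {p_low:.1f} W")
```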
Abstract:
One of the most common decisions we make is where to move our eyes next. Here we examine the impact that processing the evidence supporting competing options has on saccade programming. Participants were asked to saccade to one of two possible visual targets indicated by a cloud of moving dots. We varied the evidence supporting saccade target choice by manipulating the proportion of dots moving towards one target or the other. The task became easier as the evidence supporting target choice increased, reflected in an increase in percent correct and a decrease in saccade latency. The trajectory and landing position of saccades were found to deviate away from the non-selected target, reflecting the choice of the target and the inhibition of the non-target. The extent of the deviation increased with the amount of sensory evidence supporting target choice. This shows that the decision-making processes involved in saccade target choice have an impact on the spatial control of a saccade, extending the processes involved in the control of saccade metrics beyond a competition between visual stimuli to one also reflecting a competition between options.
Abstract:
Random number generation (RNG) is a functionally complex process that is highly controlled and therefore dependent on Baddeley's central executive. This study addresses that claim by investigating whether key predictions from this framework are compatible with empirical data. In Experiment 1, the effect of increasing task demands by increasing the rate of paced generation was comprehensively examined. As expected, faster rates affected performance negatively because central resources were increasingly depleted. Next, the effect of participants' exposure was examined in Experiment 2 by providing increasing amounts of practice on the task. There was no improvement over 10 practice trials, suggesting that the high level of strategic control required by the task was constant and not amenable to any automatization gain with repeated exposure. Together, the results demonstrate that RNG performance is a highly controlled and demanding process that is sensitive to additional demands on central resources (Experiment 1) and unaffected by repeated performance or practice (Experiment 2). These features render the easily administered RNG task an ideal and robust index of executive function that is highly suitable for repeated clinical use.
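As a concrete example of scoring such a task, one widely used summary of RNG-task output is Evans' RNG index, based on digram frequencies; the sketch below assumes that common definition (and one common convention for the denominator) with an invented response sequence.

```python
import math
from collections import Counter

# Evans' RNG index: sum(n_ij * log n_ij) over digram counts, divided by
# sum(n_i * log n_i) over single-response counts. 0 = maximally random digram
# usage, 1 = completely stereotyped. Counting first elements of digrams for
# the denominator is one common convention; the sequence is made up.

def rng_index(seq):
    digrams = Counter(zip(seq, seq[1:]))
    singles = Counter(seq[:-1])  # first element of each digram
    num = sum(n * math.log(n) for n in digrams.values() if n > 1)
    den = sum(n * math.log(n) for n in singles.values() if n > 1)
    return num / den if den else 0.0

responses = [3, 7, 1, 9, 3, 7, 2, 1, 9, 4, 7, 2, 9, 3, 7, 1]
print(f"RNG index = {rng_index(responses):.3f}")
```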
Abstract:
A novel sparse kernel density estimator is derived based on a regression approach, which selects a very small subset of significant kernels by means of the D-optimality experimental design criterion using an orthogonal forward selection procedure. The weights of the resulting sparse kernel model are calculated using the multiplicative nonnegative quadratic programming algorithm. The proposed method is computationally attractive in comparison with many existing kernel density estimation algorithms. Our numerical results also show that the proposed method compares favourably with other existing methods for constructing kernel density estimates, in terms of both test accuracy and model sparsity.
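A minimal sketch of the weight-estimation step follows, assuming Gaussian kernels and a simplified Lee-Seung-style multiplicative update in place of the paper's multiplicative nonnegative quadratic programming algorithm; the D-optimality forward selection of kernels is omitted, and the target construction and all settings are illustrative.

```python
import numpy as np

# Fit nonnegative kernel weights by a multiplicative update that preserves
# w >= 0 at every step, then normalise so the weights define a density.
rng = np.random.default_rng(0)
x = rng.normal(size=200)          # training sample
centres = x[:20]                  # stand-in for kernels surviving selection
h = 0.4                           # kernel bandwidth (illustrative)

def gauss(u):
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

Phi = gauss((x[:, None] - centres[None, :]) / h) / h  # kernel design matrix
y = Phi.mean(axis=1)              # crude density target (Parzen-style)

A, b = Phi.T @ Phi, Phi.T @ y     # quadratic-programme data (A, b nonnegative)
w = np.full(len(centres), 1.0 / len(centres))
for _ in range(200):
    w *= b / (A @ w + 1e-12)      # multiplicative update keeps w nonnegative
w /= w.sum()                      # density weights must sum to one

density_at_0 = gauss((0.0 - centres) / h) / h @ w
print(density_at_0)
```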
Abstract:
A construction algorithm for multioutput radial basis function (RBF) network modelling is introduced by combining a locally regularised orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximised model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious RBF network model with excellent generalisation performance. The D-optimality design criterion enhances the model efficiency and robustness. A further advantage of the combined approach is that the user only needs to specify a weighting for the D-optimality cost in the combined RBF model selection criterion, and the entire model construction procedure becomes automatic. The value of this weighting does not influence the model selection procedure critically, and it can be chosen with ease from a wide range of values.
Abstract:
The note proposes an efficient nonlinear identification algorithm by combining a locally regularized orthogonal least squares (LROLS) model selection with a D-optimality experimental design. The proposed algorithm aims to achieve maximized model robustness and sparsity via two effective and complementary approaches. The LROLS method alone is capable of producing a very parsimonious model with excellent generalization performance. The D-optimality design criterion further enhances the model efficiency and robustness. An added advantage is that the user only needs to specify a weighting for the D-optimality cost in the combined model selection criterion, and the entire model construction procedure becomes automatic. The value of this weighting does not influence the model selection procedure critically, and it can be chosen with ease from a wide range of values.
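The two preceding abstracts describe the same combined recipe, which can be sketched as orthogonal forward selection whose per-step score adds a weighted D-optimality increment to the usual error-reduction ratio. Because selected columns are orthogonalised, the accumulated log(w_k'w_k) terms track the log-determinant that D-optimality maximises. This sketch follows that general idea only; the local regularisation (the LROLS part) is omitted and all names and settings are assumptions.

```python
import numpy as np

def ofs_doptimality(Phi, y, n_terms, beta=1e-3):
    """Greedy orthogonal forward selection with a D-optimality term.

    At each step, pick the candidate whose orthogonalised component maximises
    error-reduction ratio + beta * log(column energy), then deflate the
    remaining candidates (Gram-Schmidt).
    """
    selected = []
    R = Phi.copy()                                   # orthogonalised candidates
    for _ in range(n_terms):
        energy = np.sum(R**2, axis=0)
        err = (R.T @ y) ** 2 / (energy * (y @ y) + 1e-12)  # error-reduction ratio
        score = err + beta * np.log(energy + 1e-12)        # D-optimality increment
        score[selected] = -np.inf                          # never re-select
        k = int(np.argmax(score))
        selected.append(k)
        w = R[:, k:k+1]
        R = R - w @ (w.T @ R) / (w.T @ w)                  # deflate remaining columns
    return selected

rng = np.random.default_rng(1)
Phi = rng.normal(size=(100, 30))
y = Phi[:, [2, 7, 11]] @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
print(ofs_doptimality(Phi, y, n_terms=3))  # should recover columns 2, 7, 11
```

As the abstracts note, the single user-set weighting (here `beta`) is not critical: with orthogonalised regressors, the D-optimality term mainly discourages ill-conditioned, low-energy candidates rather than reordering the strong ones.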