864 results for Practice-based Approach
Abstract:
Armed with the ‘equity’ and ‘conservation’ arguments that have a deep resonance with farming communities, developing countries are crafting a range of measures designed to protect farmers’ access to innovations, reward their contributions to the conservation and enhancement of plant genetic resources and provide incentives for sustained on-farm conservation. These measures range from the commercialization of farmers’ varieties to the conferment of a set of legally enforceable rights on farming communities – the exercise of which is expected to provide economic rewards to those responsible for on-farm conservation and innovation. The rights-based approach has been the cornerstone of legislative provision for implementing farmers’ rights in most developing countries. In drawing up these measures, developing countries do not appear to have systematically examined or provided for the substantial institutional capacity required for the effective implementation of farmers’ rights provisions. The lack of institutional capacity threatens to undermine any prospect of serious implementation of these provisions. More importantly, the expectation that significant incentives for on-farm conservation and innovation will flow from these ‘rights’ may be based on a flawed understanding of the economics of intellectual property rights. While farmers’ rights may provide only limited rewards for conservation, they may still have the effect of diluting the incentives for innovative institutional breeding programs – with the private sector increasingly relying on non-IPR instruments to profit from innovation. The focus on a rights-based approach may also draw attention away from alternative stewardship-based approaches to the realization of farmers’ rights objectives.
Abstract:
CONTEXT. Rattus tanezumi is a serious crop pest within the island of Luzon, Philippines. In intensive flood-irrigated rice field ecosystems of Luzon, female R. tanezumi are known to primarily nest within the tillers of ripening rice fields and along the banks of irrigation canals. The nesting habits of R. tanezumi in complex rice–coconut cropping systems are unknown. AIMS. To identify the natal nest locations of R. tanezumi females in rice–coconut systems of the Sierra Madre Biodiversity Corridor (SMBC), Luzon, during the main breeding season to develop a management strategy that specifically targets their nesting habitat. METHODS. When rice was at the booting to ripening stage, cage-traps were placed in rice fields adjacent to coconut habitat. Thirty breeding adult R. tanezumi females were fitted with radio-collars and successfully tracked to their nest sites. KEY RESULTS. Most R. tanezumi nests (66.7%) were located in coconut groves, five nests (16.7%) were located in rice fields and five nests (16.7%) were located on the rice field edge. All nests were located above ground level and seven nests were located in coconut tree crowns. The median distance of nest sites to the nearest rice field was 22.5 m. Most nest site locations had good cover of ground vegetation and understorey vegetation, but low canopy cover. Only one nest location had an understorey vegetation height of less than 20 cm. CONCLUSIONS. In the coastal lowland rice–coconut cropping systems of the SMBC, female R. tanezumi showed a preference for nesting in adjacent coconut groves. This is contrary to previous studies in intensive flood-irrigated rice ecosystems of Luzon, where the species nests mainly in the banks of irrigation canals. It is important to understand rodent breeding ecology in a specific ecosystem before implementing appropriate management strategies. IMPLICATIONS. In lowland rice–coconut cropping systems, coconut groves adjacent to rice fields should be targeted for the management of R. tanezumi nest sites during the main breeding season as part of an integrated ecologically based approach to rodent pest management.
Abstract:
This article applies FIMIX-PLS segmentation methodology to detect and explore unanticipated reactions to organisational strategy among stakeholder segments. For many large organisations today, the tendency to apply a “one-size-fits-all” strategy to members of a stakeholder population, commonly driven by a desire for simplicity, efficiency and fairness, may actually result in unanticipated consequences amongst specific subgroups within the target population. This study argues that it is critical for organisations to understand the varying and potentially harmful effects of strategic actions across differing, and previously unidentified, segments within a stakeholder population. The case of a European revenue service that currently focuses its strategic actions on building trust and compliant behaviour amongst taxpayers is used as the context for this study. FIMIX-PLS analysis is applied to a sample of 501 individual taxpayers, while a novel PLS-based approach for assessing measurement model invariance that can be applied to both reflective and formative measures is also introduced for the purpose of multi-group comparisons. The findings suggest that individual taxpayers can be split into two equal-sized segments with highly differentiated characteristics and reactions to organisational strategy and communications. Compliant behaviour in the first segment (n = 223), labelled “relationships centred on trust,” is mainly driven through positive service experiences and judgements of competence, while judgements of benevolence lead to the unanticipated reaction of increasing distrust among this group. Conversely, compliant behaviour in the second segment (n = 278), labelled “relationships centred on distrust,” is driven by the reduction of fear and scepticism towards the revenue service, which is achieved through signalling benevolence, reduced enforcement and the lower incidence of negative stories. In this segment, the use of enforcement has the unanticipated and counterproductive effect of ultimately reducing compliant behaviour.
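The segment-level analysis described above can be illustrated with a simplified stand-in: a Gaussian mixture recovers latent respondent segments, and a per-segment regression then exposes opposite reactions to the same driver. This is a hedged sketch, not FIMIX-PLS itself; the synthetic data, segment sizes and effect signs are illustrative assumptions that merely echo the reported pattern, and it relies on scikit-learn.

```python
# Illustrative sketch only: a Gaussian mixture plus per-segment regressions
# as a rough analogue of finite-mixture (FIMIX-style) segmentation.
# Data, segment sizes and effect signs are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Two synthetic taxpayer segments: benevolence signals backfire in one
# and build compliance in the other.
n1, n2 = 223, 278
benevolence = np.concatenate([rng.normal(-2, 1, n1), rng.normal(2, 1, n2)])
compliance = np.concatenate([
    -0.6 * benevolence[:n1] + rng.normal(0, 0.3, n1),
    0.7 * benevolence[n1:] + rng.normal(0, 0.3, n2),
])
X = np.column_stack([benevolence, compliance])

# Recover the latent segments, then estimate a per-segment slope.
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
slopes = {}
for seg in (0, 1):
    mask = labels == seg
    reg = LinearRegression().fit(benevolence[mask].reshape(-1, 1), compliance[mask])
    slopes[seg] = reg.coef_[0]
    print(f"segment {seg}: n={mask.sum()}, benevolence->compliance slope={slopes[seg]:+.2f}")
```

A pooled regression over all 501 observations would average the two slopes toward zero, which is exactly the kind of masked heterogeneity a segmentation approach is meant to reveal.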
Abstract:
A recent article in this journal challenged claims that a human rights framework should be applied to drug control. This article questions the author’s assertions and reframes them in the context of socio-legal drug scholarship, aiming to build on the discourse concerning human rights and drug use. It is submitted that a rights-based approach is a necessary, indeed obligatory, ethical and legal framework through which to address drug use and that international human rights law provides the proper scope for determining where interferences with individual human rights might be justified on certain, limited grounds.
Abstract:
For more than half a century, emotion researchers have attempted to establish the dimensional space that most economically accounts for similarities and differences in emotional experience. Today, many researchers focus exclusively on two-dimensional models involving valence and arousal. Adopting a theoretically based approach, we show for three languages that four dimensions are needed to satisfactorily represent similarities and differences in the meaning of emotion words. In order of importance, these dimensions are evaluation-pleasantness, potency-control, activation-arousal, and unpredictability. They were identified on the basis of the applicability of 144 features representing the six components of emotions: (a) appraisals of events, (b) psychophysiological changes, (c) motor expressions, (d) action tendencies, (e) subjective experiences, and (f) emotion regulation.
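The claim that four dimensions suffice can be illustrated with a generic dimensionality check: if word-by-feature ratings truly live on a small number of latent dimensions, the leading principal components absorb nearly all the variance. This is a hedged sketch on synthetic data; PCA here is a stand-in for the study's own analysis, and the word and feature counts are illustrative (144 features as in the abstract, an assumed 24 words).

```python
# Illustrative sketch only: verifying the intrinsic dimensionality of a
# synthetic word-by-feature applicability matrix via PCA. The data are
# generated to live on 4 latent dimensions plus small noise.
import numpy as np

rng = np.random.default_rng(1)
n_words, n_features, n_dims = 24, 144, 4

# Ratings = latent scores times feature loadings, plus noise.
loadings = rng.normal(0, 1, (n_dims, n_features))
scores = rng.normal(0, 1, (n_words, n_dims))
ratings = scores @ loadings + rng.normal(0, 0.1, (n_words, n_features))

# PCA via SVD of the column-centred matrix.
centred = ratings - ratings.mean(axis=0)
singular_values = np.linalg.svd(centred, compute_uv=False)
variance = singular_values**2 / (singular_values**2).sum()
cumulative = variance.cumsum()
print("variance explained by first 4 components:", round(cumulative[3], 3))
```

With real ratings the drop-off after the fourth component would be less abrupt, and deciding where to cut is the substantive judgement the abstract reports.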
Abstract:
In this article, I study the impacts of a specific incentives-based approach to safety regulation, namely the control of quality through sampling and threatening penalties when quality fails to meet some minimum standard. The welfare-improving impacts of this type of scheme seem high and are cogently illustrated in a recent contribution by Segerson, which stimulated many of the ideas in this paper. For this reason, the reader is referred to Segerson for a background on some of the motivation, and throughout, I make an effort to indicate differences between the two approaches. There are three major differences. First, I dispense with the calculus as much as possible, seeking readily interpreted, closed-form solutions to illustrate the main ideas. Second, (strategically optimal, symmetric) Nash equilibria are the mainstay of each of the current models. Third, in the uncertain-quality-provision equilibria, each of the Nash suppliers chooses the level of the lower bound for quality as a control and offers a draw from its (private) distribution in a contribution to the (public) pool of quality.
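The basic sampling-and-penalty logic can be made concrete with a small numeric illustration, not taken from the paper: a risk-neutral supplier meets the minimum standard whenever the cost of doing so is below the expected penalty from being sampled. All numbers and the function name are illustrative assumptions.

```python
# Illustrative sketch only: a risk-neutral supplier's compliance decision
# under random sampling with a penalty for substandard quality.
def complies(cost_of_quality, inspection_prob, penalty):
    """Supplier meets the minimum standard iff the expected penalty
    (inspection probability times fine) exceeds the cost of compliance."""
    return cost_of_quality < inspection_prob * penalty

# With a 10% sampling rate and a penalty of 500, the expected penalty is 50,
# so compliance costs below 50 are deterred and costs above 50 are not.
print(complies(cost_of_quality=40, inspection_prob=0.1, penalty=500))
print(complies(cost_of_quality=60, inspection_prob=0.1, penalty=500))
```

The strategic versions in the article are richer: each supplier chooses a quality lower bound and draws from a private distribution, so the equilibrium condition depends on rivals' choices rather than a fixed threshold.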
Abstract:
As a major mode of intraseasonal variability, which interacts with weather and climate systems on a near-global scale, the Madden–Julian Oscillation (MJO) is a crucial source of predictability for numerical weather prediction (NWP) models. Despite its global significance and comprehensive investigation, improvements in the representation of the MJO in an NWP context remain elusive. However, recent modifications to the model physics in the ECMWF model led to advances in the representation of atmospheric variability and the unprecedented propagation of the MJO signal through the entire integration period. In light of these recent advances, a set of hindcast experiments have been designed to assess the sensitivity of MJO simulation to the formulation of convection. Through the application of established MJO diagnostics, it is shown that the improvements in the representation of the MJO can be directly attributed to the modified convective parametrization. Furthermore, the improvements are attributed to the move from a moisture-convergent- to a relative-humidity-dependent formulation for organized deep entrainment. It is concluded that, in order to understand the physical mechanisms through which a relative-humidity-dependent formulation for entrainment led to an improved simulation of the MJO, a more process-based approach should be taken. The application of process-based diagnostics to the hindcast experiments presented here will be the focus of Part II of this study.
Abstract:
Keyphrases are added to documents to help identify the areas of interest they contain. However, in a significant proportion of papers author-selected keyphrases are not appropriate for the document they accompany: for instance, they can be classificatory rather than explanatory, or they are not updated when the focus of the paper changes. As such, automated methods for improving the use of keyphrases are needed, and various methods have been published. However, each method was evaluated using a different corpus, typically one relevant to the field of study of the method’s authors. This not only makes it difficult to incorporate the useful elements of algorithms in future work, but also makes comparing the results of each method inefficient and ineffective. This paper describes the work undertaken to compare five methods across a common baseline of corpora. The methods chosen were Term Frequency, Inverse Document Frequency, the C-Value, the NC-Value, and a Synonym-based approach. These methods were analysed to evaluate performance and quality of results, and to provide a future benchmark. It is shown that Term Frequency and Inverse Document Frequency were the best algorithms, with the Synonym approach following them. Following these findings, a study was undertaken into the value of using human evaluators to judge the outputs. The Synonym method was compared to the original author keyphrases of the Reuters’ News Corpus. The findings show that authors of Reuters’ news articles provide good keyphrases but that more often than not they do not provide any keyphrases.
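Two of the baselines compared above, Term Frequency and Inverse Document Frequency, can be sketched in a few lines. This is a generic TF and smoothed-IDF formulation on a toy corpus, not the exact scoring or tokenisation used in the paper; the corpus and the whitespace tokeniser are illustrative assumptions.

```python
# Illustrative sketch only: Term Frequency and (smoothed) TF-IDF scoring
# of single-word candidates over a toy three-document corpus.
import math
from collections import Counter

corpus = [
    "rodent pest management in rice fields",
    "rice yield and rice irrigation management",
    "keyphrase extraction from news articles",
]
docs = [doc.split() for doc in corpus]  # naive whitespace tokenisation

def tf_scores(doc):
    """Raw term frequency within one document."""
    return Counter(doc)

def idf(term):
    """Smoothed inverse document frequency across the corpus."""
    df = sum(term in doc for doc in docs)
    return math.log(len(docs) / (1 + df)) + 1

def tfidf_scores(doc):
    return {term: tf * idf(term) for term, tf in tf_scores(doc).items()}

scores = tfidf_scores(docs[1])
top = max(scores, key=scores.get)
print("top keyphrase for doc 2:", top)
```

Real keyphrase extractors add candidate filtering (noun phrases, stoplists) and multi-word handling; the C-Value and NC-Value methods, for instance, weight nested multi-word terms rather than single tokens.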
Abstract:
In this paper, various types of fault detection methods for fuel cells are compared. For example, those that use a model-based approach, a data-driven approach, or a combination of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. Specifically, the application of classification methods to vectors of reconstructed currents by magnetic tomography or to vectors of magnetic field measurements directly is explored. Bases are simulated using the finite integration technique (FIT) and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem is a part of the classification problem on magnetic field measurements as well. This is independent of the particular working mode of the cell but influenced by the type of faulty behavior that is studied. The numerical results demonstrate the ill-posedness by the exponential decay behavior of the singular values for three examples of fault classes.
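Fisher's linear discriminant, the classifier named above, can be sketched from first principles: project onto the direction that maximises between-class separation relative to within-class scatter, then threshold at the midpoint of the projected class means. This is a hedged illustration on two synthetic, well-separated classes standing in for fault signatures, not the paper's simulated magnetic field data.

```python
# Illustrative sketch only: two-class Fisher's linear discriminant on
# synthetic 2-D data standing in for healthy vs. faulty measurements.
import numpy as np

rng = np.random.default_rng(2)
healthy = rng.normal([0.0, 0.0], 0.5, (100, 2))
faulty = rng.normal([2.0, 1.5], 0.5, (100, 2))

# Fisher direction: w = Sw^{-1} (mu1 - mu0), Sw = pooled within-class scatter.
mu0, mu1 = healthy.mean(axis=0), faulty.mean(axis=0)
sw = np.cov(healthy.T) * (len(healthy) - 1) + np.cov(faulty.T) * (len(faulty) - 1)
w = np.linalg.solve(sw, mu1 - mu0)

# Classify by projecting onto w and thresholding at the projected midpoint.
threshold = w @ (mu0 + mu1) / 2
predict = lambda x: (x @ w > threshold).astype(int)  # 0 = healthy, 1 = faulty
accuracy = np.concatenate([predict(healthy) == 0, predict(faulty) == 1]).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In the ill-posed settings the abstract discusses, `sw` becomes badly conditioned, so the `np.linalg.solve` step would need the kind of regularization the paper applies before the discriminant direction is meaningful.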