96 results for allocation rules
Abstract:
Plants constantly sense the changes in their environment; when mineral elements are scarce, they often allocate a greater proportion of their biomass to the root system. This acclimatory response is a consequence of metabolic changes in the shoot and an adjustment of carbohydrate transport to the root. It has long been known that deficiencies of essential macronutrients (nitrogen, phosphorus, potassium and magnesium) result in an accumulation of carbohydrates in leaves and roots, and modify the shoot-to-root biomass ratio. Here, we present an update on the effects of mineral deficiencies on the expression of genes involved in primary metabolism in the shoot, the evidence for increased carbohydrate concentrations and altered biomass allocation between shoot and root, and the consequences of these changes on the growth and morphology of the plant root system.
Abstract:
Attentional allocation to emotional stimuli is often proposed to be driven by valence, and in particular by negativity. However, many negative stimuli are also arousing, leaving open the question of whether valence or arousal accounts for this effect. The authors examined whether the valence or the arousal level of emotional stimuli influences the allocation of spatial attention, using a modified spatial cueing task. Participants responded to targets that were preceded by cues consisting of emotional pictures varying in arousal and valence. Response latencies showed that disengagement of spatial attention was slower for stimuli high in arousal than for stimuli low in arousal. The effect was independent of the valence of the pictures and was not gender-specific. The findings support the idea that arousal affects the allocation of attention.
Abstract:
The experience of learning and using a second language (L2) has been shown to affect the grey matter (GM) structure of the brain. Importantly, GM density in several cortical and subcortical areas has been shown to be related to performance in L2 tasks. Here we show that bilingualism can lead to increased GM volume in the cerebellum, a structure that has been related to the processing of grammatical rules. Additionally, the cerebellar GM volume of highly proficient L2 speakers correlates with their performance in a task tapping grammatical processing in an L2, demonstrating the importance of the cerebellum for the establishment and use of grammatical rules in an L2.
Abstract:
The financial crisis of 2007-2009 and the subsequent reaction of the G20 have created a new global regulatory landscape. Within the EU, change of regulatory institutions is ongoing. The research objective of this study is to understand how institutional changes to the EU regulatory landscape may affect corresponding institutionalized operational practices within financial organizations and to understand the role of agency within this process. Our motivation is to provide insight into these changes from an operational management perspective, as well as to test Thelen and Mahoney's (2010) modes of institutional change. Consequently, the study researched implementations of an Investment Management System with a rules-based compliance module within financial organizations. The research consulted compliance and risk managers, as well as systems experts. The study suggests that prescriptive regulations are likely to create isomorphic configurations of rules-based compliance systems, which consequently will enable the institutionalization of associated compliance practices. The study reveals the ability of some agents within financial organizations to control the impact of regulatory institutions, not directly, but through the systems and processes they adopt to meet requirements. Furthermore, the research highlights the boundaries and relationships between each mode of change as future avenues of research.
Abstract:
Prism is a modular classification rule generation method based on the ‘separate and conquer’ approach, an alternative to the rule induction approach using decision trees, also known as ‘divide and conquer’. Prism often achieves a similar level of classification accuracy to decision trees, but tends to produce a more compact, noise-tolerant set of classification rules. As with other classification rule generation methods, a principal problem arising with Prism is overfitting due to over-specialised rules. In addition, over-specialised rules increase the associated computational complexity. These problems can be solved by pruning methods. For the Prism method, two pruning algorithms have been introduced recently for reducing overfitting of classification rules: J-pruning and Jmax-pruning. Both algorithms are based on the J-measure, an information-theoretic means of quantifying the theoretical information content of a rule. Jmax-pruning attempts to exploit the J-measure to its full potential, because J-pruning does not actually achieve this and may even lead to underfitting. A series of experiments has shown that Jmax-pruning may outperform J-pruning in reducing overfitting. However, Jmax-pruning is computationally relatively expensive and may also lead to underfitting. This paper reviews the Prism method and the two existing pruning algorithms above. It also proposes a novel pruning algorithm called Jmid-pruning. The latter is based on the J-measure and reduces overfitting to a similar level as the other two algorithms, but is better at avoiding underfitting and unnecessary computational effort. The authors conduct an experimental study of the performance of the Jmid-pruning algorithm in terms of classification accuracy and computational efficiency. The algorithm is also evaluated comparatively with the J-pruning and Jmax-pruning algorithms.
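The J-measure this abstract refers to scores a rule "if Y then X" by weighting the rule's information content by how often its antecedent fires. A minimal sketch of the standard formulation in Python; the probability values in the example are invented for illustration:

```python
import math

def j_measure(p_y, p_x, p_x_given_y):
    """J-measure of a rule 'if Y then X'.

    p_y         -- probability that the rule's antecedent Y fires
    p_x         -- prior probability of the consequent class X
    p_x_given_y -- accuracy of the rule, P(X | Y)
    """
    def term(p, q):
        # contribution p * log2(p / q), taken as 0 when p == 0
        return 0.0 if p == 0 else p * math.log2(p / q)

    # cross-entropy between posterior and prior class distributions,
    # weighted by how often the rule fires
    j_inner = term(p_x_given_y, p_x) + term(1 - p_x_given_y, 1 - p_x)
    return p_y * j_inner

# Illustrative rule: fires on 30% of examples and lifts the class
# probability from a 0.5 prior to a 0.9 posterior.
print(round(j_measure(0.3, 0.5, 0.9), 4))  # → 0.1593
```

A rule that does not shift the class distribution (posterior equal to prior) scores zero, which is why over-specialised rules with little information content are candidates for pruning.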
Abstract:
Despite an extensive market segmentation literature, applied academic studies which bridge segmentation theory and practice remain a priority for researchers. The need for studies which examine the segmentation implementation barriers faced by organisations is particularly acute. We explore segmentation implementation through the eyes of a European utilities business, by following its progress through a major segmentation project. The study reveals the character and impact of implementation barriers occurring at different stages in the segmentation process. By classifying the barriers, we develop implementation "rules" for practitioners which are designed to minimise their occurrence and impact. We further contribute to the literature by developing a deeper understanding of the mechanisms through which these implementation rules can be applied.
Abstract:
In Kazakhstan, a transitional nation in Central Asia, the development of public–private partnerships (PPPs) is at its early stage and increasingly of strategic importance. This case study investigates risk allocation in an ongoing project: the construction and operation of 11 kindergartens in the city of Karaganda in the concession form for 14 years. Drawing on a conceptual framework of effective risk allocation, the study identifies principal PPP risks, provides a critical assessment of how and in what way each partner bears a certain risk, highlights the reasons underpinning risk allocation decisions and delineates the lessons learned. The findings show that the government has effectively transferred most risks to the private sector partner, whilst both partners share the demand risk of childcare services and the project default risk. The strong elements of risk allocation include clear assignment of parties’ responsibilities, streamlined financing schemes and incentives to complete the main project phases on time. However, risk allocation has missed an opportunity to create incentives for service quality improvements and take advantage of economies of scale. The most controversial element of risk allocation, as the study finds, is a revenue stream that an operator is supposed to receive from the provision of services unrelated to childcare, as neither partner is able to mitigate this revenue risk. The article concludes that in the kindergartens’ PPP, the government has achieved almost complete transfer of risks to the private sector partner. However, the costs of transfer are extensive government financial outlays that seriously compromise the PPP value for money.
Abstract:
In this paper we propose methods for computing Fresnel integrals based on truncated trapezium rule approximations to integrals on the real line, with these trapezium rules modified to take into account poles of the integrand near the real axis. Our starting point is a method for computation of the error function of complex argument due to Matta and Reichel (J Math Phys 34:298–307, 1956) and Hunter and Regan (Math Comp 26:539–541, 1972). We construct approximations which we prove are exponentially convergent as a function of N, the number of quadrature points, obtaining explicit error bounds which show that accuracies of 10^-15 uniformly on the real line are achieved with N = 12, as confirmed by computations. The approximations we obtain are additionally attractive in that they maintain small relative errors for small and large argument, are analytic on the real axis (echoing the analyticity of the Fresnel integrals), and are straightforward to implement.
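The paper's modification for poles near the real axis is not reproduced here, but the property it builds on can be seen in a minimal sketch: for an analytic integrand that decays on the real line, the truncated trapezium rule converges exponentially in the step size and truncation length. The Gaussian integral, with exact value sqrt(pi), makes the point; the step size and truncation point below are illustrative choices only:

```python
import math

def trapezium_gauss(h, n_terms):
    """Truncated trapezium rule for the integral of exp(-t^2) over
    the real line (exact value sqrt(pi)).  For analytic integrands
    decaying at infinity this rule is exponentially convergent,
    the property the paper's Fresnel approximations exploit."""
    # sample symmetrically at t = k*h for |k| <= n_terms
    s = sum(math.exp(-(k * h) ** 2) for k in range(-n_terms, n_terms + 1))
    return h * s

approx = trapezium_gauss(h=0.5, n_terms=12)
# with only 25 samples the error is already near machine precision
print(abs(approx - math.sqrt(math.pi)))
```

The error has two parts, a discretization term decaying like exp(-(pi/h)^2) and a truncation term decaying like exp(-(n_terms*h)^2), so modest N already reaches double-precision accuracy, consistent with the N = 12 figure quoted in the abstract.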
Abstract:
Scene classification based on latent Dirichlet allocation (LDA) builds on the more general bag-of-visual-words model, in which the construction of a visual vocabulary is a crucial quantization step that determines the success of the classification. A framework is developed using the following new aspects: Gaussian mixture clustering for the quantization process; the use of an integrated visual vocabulary (IVV), built as the union of all centroids obtained from the separate quantization process of each class; and the use of several features, including the edge orientation histogram, CIELab color moments, and the gray-level co-occurrence matrix (GLCM). The experiments are conducted on IKONOS images with six semantic classes (tree, grassland, residential, commercial/industrial, road, and water). The results show that the use of an IVV increases the overall accuracy (OA) by 11 to 12% when implemented on the selected features and by 6% on all features. The selected features of CIELab color moments and GLCM together provide a better OA than CIELab color moments or GLCM individually, which increase the OA by only ∼2 to 3%. Moreover, the results show that the OA of LDA outperforms that of C4.5 and naive Bayes tree by ∼20%. © 2014 Society of Photo-Optical Instrumentation Engineers (SPIE) [DOI: 10.1117/1.JRS.8.083690]
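The IVV idea can be sketched in NumPy: centroids are computed separately per class and their union forms the vocabulary against which feature vectors are histogrammed. The class names, feature dimensionality, and the plain k-means quantizer below (the paper uses Gaussian mixture clustering) are illustrative assumptions only:

```python
import numpy as np

rng = np.random.default_rng(0)

def class_centroids(features, k, iters=10):
    """Toy per-class quantization with a few k-means iterations
    (a stand-in for the paper's Gaussian mixture clustering)."""
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # distance of every feature vector to every centroid
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # guard against empty clusters
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids

# hypothetical per-class local feature sets (e.g. colour/texture descriptors)
feats_by_class = {name: rng.normal(loc=3.0 * i, size=(200, 8))
                  for i, name in enumerate(["tree", "water", "road"])}

# integrated visual vocabulary (IVV): union of all per-class centroids
ivv = np.vstack([class_centroids(f, k=5) for f in feats_by_class.values()])

def bow_histogram(features, vocab):
    """Normalized bag-of-visual-words histogram over the vocabulary."""
    d = np.linalg.norm(features[:, None, :] - vocab[None, :, :], axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(vocab))
    return counts / counts.sum()

hist = bow_histogram(feats_by_class["tree"], ivv)
print(ivv.shape, hist.shape)  # 3 classes x 5 words -> 15-word vocabulary
```

Because the vocabulary is the union of per-class codebooks, words tuned to each class survive in the shared representation, which is the intuition behind the reported OA gains.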
Abstract:
Much has been written about Wall Street and the global financial crisis (GFC). From a fraudulent derivatives market to a contestable culture of banking bonuses, culpability has been examined within the frames of American praxis, namely that of American exceptionalism. This study begins with an exploratory analysis of non-US voices concerning the nature of the causes of the GFC. The analysis provides glimpses of the globalized extent of assumptions shared, but not debated within the globalization convergence of financial markets as the neo-liberal project. Practical and paradigmatic tensions are revealed in the capture of a London-based set of views articulated by senior financial executives of financial service organizations, the outcomes of which are not overly optimistic for any significant change in praxis within the immediate future.
Abstract:
Purpose: The research objective of this study is to understand how institutional changes to the EU regulatory landscape may affect corresponding institutionalized operational practices within financial organizations.
Design/methodology/approach: The study adopts an Investment Management System as its case and investigates different implementations of this system within eight financial organizations, predominantly focused on investment banking and asset management activities within capital markets. At the systems vendor site, senior systems consultants and client relationship managers were interviewed. Within the financial organizations, compliance, risk and systems experts were interviewed.
Findings: The study empirically tests modes of institutional change. Displacement and Layering were found to be the most prevalent modes. However, the study highlights how the outcomes of Displacement and Drift may be similar in effect, as both modes may cause compliance gaps. The research highlights how changes in regulations may create gaps in systems and processes which, in the short term, need to be plugged by manual processes.
Practical implications: Vendors' abilities to manage institutional change caused by Drift, Displacement, Layering and Conversion, and their ability to efficiently and quickly translate institutional variables into structured systems, have the power to ease the pain and cost of compliance, as well as reducing the risk of breaches by reducing the need for interim manual systems.
Originality/value: The study makes a contribution by applying recent theoretical concepts of institutional change to the topic of regulatory change and uses this analysis to provide insight into the effects of this new environment.