50 results for: Database search, Evidential value, Bayesian decision theory, Influence diagrams
Abstract:
More than thirty years ago, Wind's seminal review of research in market segmentation culminated in a research agenda for the subject area. In the intervening period, research has focused on the development of segmentation bases and models, segmentation research techniques and the identification of statistically sound solutions. Practical questions about implementation and the integration of segmentation into marketing strategy have received less attention, even though practitioners are known to struggle with the actual practice of segmentation. This special issue is motivated by this tension between theory and practice, which has shaped and continues to influence the research priorities for the field. Although many years have elapsed since Wind's original research agenda, pressing questions about effectiveness and productivity apparently remain, namely: (i) concerns about the link between segmentation and performance, and its measurement; and (ii) the notion that productivity improvements arising from segmentation are only achievable if the segmentation process is effectively implemented. These were the central themes of the call for papers for this special issue, which aims to develop our understanding of segmentation value, productivity and strategies, and managerial issues and implementation.
Abstract:
Social tagging has become very popular across the Internet as well as in research. The main idea behind tagging is to allow users to attach metadata to web content from their own perspective, in order to facilitate categorization and retrieval. Many factors influence users' tag choices, and many studies have analysed tagging data to reveal them. This paper uses two theories to identify these factors, namely semiotics theory and activity theory. The former treats tags as signs and the latter treats tagging as an activity. The paper uses both theories to analyse tagging behaviour by explaining all aspects of a tagging system, including tags, tagging system components and the tagging activity. The theoretical analysis produced a framework that was used to identify a number of factors. These factors can be treated as categories that can be consulted to guide users' tag choices in order to support particular tagging behaviours, such as cross-lingual tagging.
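As an illustration of the basic mechanism this abstract describes, the sketch below shows the usual (user, resource, tag) data model behind social tagging and a trivial tag-based lookup. It is a minimal, assumption-laden example, not code from the paper; all names and URLs are invented.

```python
from collections import defaultdict

# Each tagging act attaches a user-chosen tag (metadata) to a web resource.
taggings = [
    ("alice", "https://example.org/article-1", "bayesian"),
    ("bob",   "https://example.org/article-1", "estatística"),   # a cross-lingual tag
    ("alice", "https://example.org/article-2", "decision-theory"),
]

# Index tags -> resources so tagged content can be categorised and retrieved.
index = defaultdict(set)
for user, resource, tag in taggings:
    index[tag].add(resource)

print(sorted(index["bayesian"]))  # resources retrievable through this tag
```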
Abstract:
In probabilistic decision tasks, an expected value (EV) of a choice is calculated, and after the choice has been made, this can be updated based on a temporal difference (TD) prediction error between the EV and the reward magnitude (RM) obtained. The EV is computed as the probability of obtaining a reward multiplied by the RM. To understand the contribution of different brain areas to these decision-making processes, functional magnetic resonance imaging activations related to EV versus RM (or outcome) were measured in a probabilistic decision task. Activations in the medial orbitofrontal cortex were correlated with both RM and EV and were confirmed in a conjunction analysis to extend toward the pregenual cingulate cortex. From these representations, TD reward prediction errors could be produced. Activations in areas that receive projections from the orbitofrontal cortex, including the ventral striatum, midbrain, and inferior frontal gyrus, were correlated with the TD error. Activations in the anterior insula were correlated negatively with EV, occurring when low reward outcomes were expected, and also with the uncertainty of the reward, implicating this region in basic and crucial decision-making parameters, low expected outcomes, and uncertainty.
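To make the quantities in this abstract concrete, here is a minimal sketch of the expected value (EV = reward probability × reward magnitude) and the temporal difference prediction error (obtained magnitude minus EV). It is an illustrative example, not the authors' analysis code; the learning-rate update at the end is an assumption about how such an error would typically be used.

```python
def expected_value(p_reward: float, reward_magnitude: float) -> float:
    """Expected value of a choice: reward probability times reward magnitude."""
    return p_reward * reward_magnitude


def td_error(ev: float, obtained_magnitude: float) -> float:
    """Temporal-difference prediction error once the outcome is revealed."""
    return obtained_magnitude - ev


# Example trial: a 60% chance of a reward of magnitude 10.
ev = expected_value(0.6, 10.0)       # EV = 6.0
delta = td_error(ev, 10.0)           # reward delivered -> positive error of +4.0
print(ev, delta)

# Illustrative update of the EV with an assumed learning rate alpha.
alpha = 0.1
ev = ev + alpha * delta              # EV drifts toward observed outcomes over trials
print(ev)
```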
Abstract:
Purpose – The purpose of this paper is to shed light on the practice of incomplete corporate disclosure of quantitative greenhouse gas (GHG) emissions and to investigate whether external stakeholder pressure influences the existence, and separately, the completeness of voluntary GHG emissions disclosures by 431 European companies. Design/methodology/approach – A classification of reporting completeness is developed with respect to the scope, type and reporting boundary of GHG emissions, based on the guidelines of the GHG Protocol, the Global Reporting Initiative and the Carbon Disclosure Project. Logistic regression analysis is applied to examine whether proxies for exposure to climate change concerns from different stakeholder groups influence the existence and/or completeness of quantitative GHG emissions disclosure. Findings – From 2005 to 2009, on average only 15 percent of companies that disclose GHG emissions report them in a manner that the authors consider complete. Results of regression analyses suggest that external stakeholder pressure is a determinant of the existence but not the completeness of emissions disclosure. Findings are consistent with stakeholder theory arguments that companies respond to external stakeholder pressure to report GHG emissions, but also with legitimacy theory claims that firms can use carbon disclosure, in this case the incomplete reporting of emissions, as a symbolic act to address legitimacy exposures. Practical implications – Bringing corporate GHG emissions disclosure in line with recommended guidelines will require either more direct stakeholder pressure or, perhaps, a mandated disclosure regime. In the meantime, users of the data will need to carefully consider the relevance of the reported figures and develop the necessary competencies to detect and control for their incompleteness. A more troubling concern is that stakeholders may instead grow to accept less than complete disclosure. Originality/value – The paper represents the first large-scale empirical study into the completeness of companies' disclosure of quantitative GHG emissions and is the first to analyze these disclosures in the context of stakeholder pressure and its relation to legitimation.
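For readers unfamiliar with the method named in the abstract, the sketch below shows a logistic regression of that general kind, fitted to synthetic data with statsmodels. The variable names (stakeholder_pressure, firm_size, discloses) and coefficients are hypothetical assumptions and do not correspond to the paper's actual proxies or results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 431  # matches the number of companies mentioned in the abstract

# Hypothetical explanatory variables (placeholders for the paper's proxies).
stakeholder_pressure = rng.normal(size=n)
firm_size = rng.normal(size=n)

# Synthetic binary outcome: does the firm disclose GHG emissions at all?
logits = 0.8 * stakeholder_pressure + 0.3 * firm_size
discloses = rng.binomial(1, 1.0 / (1.0 + np.exp(-logits)))

# Fit a logistic regression of disclosure on the pressure proxies.
X = sm.add_constant(np.column_stack([stakeholder_pressure, firm_size]))
model = sm.Logit(discloses, X).fit(disp=False)
print(model.summary())  # coefficients and p-values for each proxy
```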
Abstract:
The challenge of moving past the classic Windows, Icons, Menus, Pointer (WIMP) interface, i.e. by turning it ‘3D’, has resulted in much research and development. To evaluate the impact of 3D on the ‘finding a target picture in a folder’ task, we built a 3D WIMP interface that allowed the systematic manipulation of visual depth, visual aids and the semantic category distribution of targets versus non-targets, and the detailed measurement of lower-level stimulus features. Across two separate experiments, a large-sample web-based experiment to understand associations and a controlled lab experiment using eye tracking to understand user focus, we investigated how visual depth, use of visual aids, use of semantic categories, and lower-level stimulus features (i.e. contrast, colour and luminance) affect how successfully participants are able to search for, and detect, the target image. Moreover, in the lab-based experiment, we captured pupillometry measurements to allow consideration of the influence of increasing cognitive load, whether from an increasing number of items on the screen or from the inclusion of visual depth. Our findings showed that increasing the visible layers of depth, and the inclusion of converging lines, did not affect target detection times, errors, or failure rates. Low-level features, including colour, luminance, and number of edges, did correlate with differences in target detection times, errors, and failure rates. Our results also revealed that semantic sorting algorithms significantly decreased target detection times. Increased semantic contrast between a target and its neighbours correlated with an increase in detection errors. Finally, the pupillometric data did not provide evidence of any correlation between the number of visible layers of depth and pupil size; however, using structural equation modelling, we demonstrated that cognitive load does influence detection failure rates when there is luminance contrast between the target and its surrounding neighbours. The results suggest that WIMP interaction designers should consider stimulus-driven factors, which were shown to influence the efficiency with which a target icon can be found in a 3D WIMP interface.
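As a concrete illustration of one of the low-level features mentioned above, the sketch below computes a simple luminance contrast between a target icon and a neighbouring icon and correlates it with synthetic detection times. The contrast measure (Michelson), the colour-to-luminance weights, and all of the data are assumptions for illustration, not the study's analysis pipeline.

```python
import numpy as np
from scipy.stats import pearsonr


def relative_luminance(rgb):
    """Approximate relative luminance of an RGB colour with values in 0..1."""
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b


def michelson_contrast(lum_a, lum_b):
    """Michelson contrast between two luminance values."""
    return abs(lum_a - lum_b) / (lum_a + lum_b + 1e-9)


# Synthetic per-trial data: mean target/neighbour colours and detection times.
rng = np.random.default_rng(1)
contrasts, times = [], []
for _ in range(100):
    target = rng.uniform(0.0, 1.0, 3)
    neighbour = rng.uniform(0.0, 1.0, 3)
    c = michelson_contrast(relative_luminance(target), relative_luminance(neighbour))
    contrasts.append(c)
    # Toy detection time in seconds: higher contrast -> slightly faster detection.
    times.append(2.0 - 0.5 * c + rng.normal(scale=0.2))

r, p = pearsonr(contrasts, times)
print(f"luminance contrast vs detection time: r={r:.2f}, p={p:.3f}")
```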