39 results for judgements
Abstract:
Background: Research into mental-health risks has tended to focus on epidemiological approaches and to consider pieces of evidence in isolation. Less is known about the particular factors and their patterns of occurrence that influence clinicians’ risk judgements in practice. Aims: To identify the cues used by clinicians to make risk judgements and to explore how these combine within clinicians’ psychological representations of suicide, self-harm, self-neglect, and harm to others. Method: Content analysis was applied to semi-structured interviews conducted with 46 practitioners from various mental-health disciplines, using mind maps to represent the hierarchical relationships of data and concepts. Results: Strong consensus between experts meant their knowledge could be integrated into a single hierarchical structure for each risk. This revealed contrasting emphases between data and concepts underpinning risks, including: reflection and forethought for suicide; motivation for self-harm; situation and context for harm to others; and current presentation for self-neglect. Conclusions: Analysis of experts’ risk-assessment knowledge identified influential cues and their relationships to risks. It can inform development of valid risk-screening decision support systems that combine actuarial evidence with clinical expertise.
Abstract:
Self-criticism is strongly correlated with a range of psychopathologies, such as depression, eating disorders and anxiety. In contrast, self-reassurance is inversely associated with such psychopathologies. Despite the importance of self-judgements and evaluations, little is known about the neurophysiology of these internal processes. The current study therefore used a novel fMRI task to investigate the neuronal correlates of self-criticism and self-reassurance. Participants were presented with statements describing two types of scenario, with the instruction to imagine being either self-critical or self-reassuring in that situation. One scenario type focused on a personal setback, mistake or failure, which would elicit negative emotions, while the second described a matched neutral event. Self-criticism was associated with activity in lateral prefrontal cortex (PFC) regions and dorsal anterior cingulate (dAC), therefore linking self-critical thinking to error processing and resolution, and also behavioural inhibition. Self-reassurance was associated with left temporal pole and insula activation, suggesting that efforts to be self-reassuring engage similar regions to expressing compassion and empathy towards others. Additionally, we found a dorsal/ventral PFC divide between an individual's tendency to be self-critical or self-reassuring. Using multiple regression analyses, dorsolateral PFC activity was positively correlated with high levels of self-criticism (assessed via a self-report measure), suggesting greater error processing and behavioural inhibition in such individuals. Ventrolateral PFC activity was positively correlated with high self-reassurance. Our findings may have implications for the neural basis of a range of mood disorders that are characterised by a preoccupation with personal mistakes and failures, and a self-critical response to such events.
Abstract:
In perceptual terms, the human body is a complex 3D shape which has to be interpreted by the observer to judge its attractiveness. Both body mass and shape have been suggested as strong predictors of female attractiveness. Normally body mass and shape co-vary, and it is difficult to differentiate their separate effects. A recent study suggested that altering body mass does not modulate activity in the reward mechanisms of the brain, but shape does. However, using computer-generated, female body-shaped greyscale images based on a Principal Component Analysis of female bodies, we were able to construct images which co-vary with real female body mass (indexed by BMI) and not with body shape (indexed by WHR), and vice versa. Twelve observers (6 male and 6 female) rated these images for attractiveness during an fMRI study. The attractiveness ratings were correlated with changes in BMI and not WHR. Our primary fMRI results demonstrated that, in addition to activation in higher visual areas (such as the extrastriate body area), changing BMI also modulated activity in the caudate nucleus and other parts of the brain reward system. This shows that BMI, not WHR, modulates reward mechanisms in the brain, and we infer that this may have important implications for judgements of ideal body size in eating-disordered individuals.
Abstract:
This dissertation investigates the important and current problem of modelling human expertise. The issue arises in any computer system that emulates human decision making, and it is prominent in Clinical Decision Support Systems (CDSS) due to the complexity of the induction process and, in most cases, the vast number of parameters. Other issues, such as human error and missing or incomplete data, present further challenges. In this thesis, the Galatean Risk Screening Tool (GRiST) is used as an example of modelling clinical expertise and parameter elicitation. The tool is a mental-health clinical record management system with a top layer of decision support capabilities. It is currently being deployed by several NHS mental health trusts across the UK. The aim of the research is to investigate the problem of parameter elicitation by inducing parameters from real clinical data rather than from the human experts who provided the decision model. The induced parameters provide insight into both the data relationships and how experts make decisions themselves. The outcomes help further our understanding of human decision making and, in particular, help GRiST provide more accurate emulations of risk judgements. Although the algorithms and methods presented in this dissertation are applied to GRiST, they can be adopted for other human knowledge engineering domains.
Abstract:
This thesis explores the process of developing a principled approach for translating a model of mental-health risk expertise into a probabilistic graphical structure. Probabilistic graphical structures combine graph theory and probability theory, and they offer numerous advantages for representing domains involving uncertainty, such as the mental health domain. This thesis builds on those advantages. The Galatean Risk Screening Tool (GRiST) is a psychological model for mental health risk assessment based on fuzzy sets. The knowledge encapsulated in the psychological model was used to develop the structure of the probability graph by exploiting the semantics of the clinical expertise. This thesis describes how a chain graph can be developed from the psychological model to provide a probabilistic evaluation of risk that complements the one generated by GRiST's clinical expertise: the GRiST knowledge structure was decomposed into component parts, which were in turn mapped into equivalent probabilistic graphical structures, such as Bayesian Belief Nets and Markov Random Fields, to produce a composite chain graph that provides a probabilistic classification of risk expertise to complement the expert clinical judgements.
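The kind of probabilistic evaluation a mapped component can provide is easy to illustrate with a minimal sketch: a two-node Bayesian network fragment evaluated with the chain rule. The nodes, states and numbers below are hypothetical illustrations, not GRiST's actual knowledge structure.

```python
# Hypothetical two-node fragment: risk_factor -> risk.
# P(risk_factor)
p_factor = {"present": 0.3, "absent": 0.7}

# P(risk | risk_factor)
p_risk_given = {
    "present": {"high": 0.6, "low": 0.4},
    "absent":  {"high": 0.1, "low": 0.9},
}

def p_joint(factor, risk):
    """Chain rule: P(factor, risk) = P(factor) * P(risk | factor)."""
    return p_factor[factor] * p_risk_given[factor][risk]

def p_risk(risk):
    """Marginal P(risk), summing the factor out of the joint."""
    return sum(p_joint(f, risk) for f in p_factor)
```

A composite chain graph generalises this idea: each decomposed component contributes conditional distributions, and the joint factorises over the graph.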
Abstract:
Few works address methodological issues of how to conduct strategy-as-practice research, and even fewer focus on how to analyse the subsequent data in ways that illuminate strategy as an everyday, social practice. We address this gap by proposing a quantitative method for analysing observational data, which can complement more traditional qualitative methodologies. We propose that rigorous but context-sensitive coding of transcripts can render everyday practice analysable statistically. Such statistical analysis provides a means for analytically representing patterns and shifts within the mundane, repetitive elements through which practice is accomplished. We call this approach the Event Database (EDB); it consists of five basic coding categories that help us capture the stream of practice. Indexing codes help to index or categorise the data, in order to give context and offer some basic information about the event under discussion; they are descriptive codes which allow us to catalogue and classify events according to their assigned characteristics. Content codes concern the qualitative nature of the event, its essence; they provide a description that helps to inform judgements about the phenomenon. Nature codes help us distinguish between discursive and tangible events; we include this code to acknowledge that some events differ qualitatively from others. Type codes are abstracted from the data in order to help us classify events based on their description or nature; this involves significantly more judgement than the indexing codes but is consequently also more meaningful. Dynamics codes help us capture some of the movement or fluidity of events; this category has been included to let us capture the flow of activity over time.
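The five EDB coding categories can be pictured as a record attached to each observed event. The sketch below is our own illustration of such a record; the field contents and the example event are invented, not the authors' schema.

```python
from dataclasses import dataclass

@dataclass
class EDBEvent:
    """One coded event in the spirit of the EDB's five categories."""
    indexing: dict      # descriptive context: where/when/who
    content: str        # qualitative essence of the event
    nature: str         # "discursive" or "tangible"
    event_type: str     # classification abstracted from the data
    dynamics: str       # movement/flow of the activity over time

# Hypothetical coded event from an observed strategy meeting.
event = EDBEvent(
    indexing={"meeting": "board", "week": 3},
    content="Budget overrun reframed as a strategic investment",
    nature="discursive",
    event_type="reframing",
    dynamics="escalating",
)
```

Because every event carries the same fields, a stream of such records can be tabulated and analysed statistically, which is the point of the approach.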
Abstract:
This paper examines the beliefs and practices about the integration of grammar and skills teaching reported by 176 English language teachers from 18 countries. Teachers completed a questionnaire which elicited beliefs about grammar teaching generally as well as specific beliefs and reported practices about the integration of grammar and skills teaching. Teachers expressed strong beliefs in the need to avoid teaching grammar in isolation and reported high levels of integration of grammar in their practices. This study also examines how teachers conceptualize integration and the sources of evidence they draw on in assessing the effectiveness of their instructional practices in teaching grammar. The major findings for this paper stem from an analysis of these two issues. A range of ways in which teachers understood integration are identified and classified into two broad orientations which we label temporal and contextual. An analysis of the evidence which teachers cited in making judgements about the effectiveness of their grammar teaching practices showed that it was overwhelmingly practical and experiential and did not refer in any explicit way to second language acquisition theory. Given the volume of available theory about L2 grammar teaching generally and integration specifically, the lack of direct reference to such evidence in teachers’ accounts is noteworthy.
Abstract:
Both organizational justice and behavioural ethics are concerned with questions of 'right and wrong' in the context of work organizations. Until recently they have developed largely independently of each other, choosing to focus on subtly different concerns, constructs and research questions. The last few years have, however, witnessed a significant growth in theoretical and empirical research integrating these closely related academic specialities. We review the organizational justice literature, illustrating the impact of behavioural ethics research on important fairness questions. We argue that organizational justice research is focused on four recurring issues: (i) why justice at work matters to individuals; (ii) how justice judgements are formed; (iii) the consequences of injustice; and (iv) the factors antecedent to justice perceptions. Current and future justice research has begun and will continue borrowing from the behavioural ethics literature in answering these questions. © The Author(s) 2013.
Abstract:
This study re-examines the afterimage paradigm which claims to show that a minority produces a conversion in a task involving afterimage judgements (more private influence than public influence) as opposed to mere compliance produced by a majority. Subsequent failures to replicate this finding have suggested that the changes in the afterimages could be attributed to increased attention due to an ambiguous stimulus coupled with subject suspiciousness. This study attempted to replicate the original experiment but with an unambiguous stimulus in order to remove potential biases. The results showed shifts in afterimages consistent with the increased attention hypothesis for a minority and majority and these were unaffected by the level of suspiciousness reported by the subjects. Additional data shows that no shifts were found in a no-influence control condition showing that shifts were related to exposure to a deviant source and not to response repetition.
Abstract:
In a Data Envelopment Analysis model, some of the weights used to compute the efficiency of a unit can have zero or negligible value despite the importance of the corresponding input or output. This paper offers an approach to preventing inputs and outputs from being ignored in the DEA assessment under the multiple-input, multiple-output VRS environment, building on an approach introduced in Allen and Thanassoulis (2004) for single-input, multiple-output CRS cases. The proposed method is based on the idea of introducing unobserved DMUs, created by adjusting the input and output levels of certain observed, relatively efficient DMUs in a manner which reflects a combination of technical information and the decision maker's value judgements. In contrast to many alternative techniques used to constrain weights and/or improve envelopment in DEA, this approach allows one to impose local information on production trade-offs, which are in line with the general VRS technology. The suggested procedure is illustrated using real data. © 2011 Elsevier B.V. All rights reserved.
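For readers unfamiliar with DEA efficiency scores, the simplest special case is worth sketching. The paper's VRS, multi-input/output model requires linear programming, but in the single-input, single-output CRS case a DMU's efficiency reduces to its output/input ratio relative to the best observed ratio. The DMU figures below are invented for illustration.

```python
# Hypothetical DMUs: name -> (input level, output level).
dmus = {"A": (2.0, 4.0), "B": (3.0, 9.0), "C": (5.0, 10.0)}

# Best observed productivity ratio defines the CRS frontier.
best_ratio = max(out / inp for inp, out in dmus.values())

def efficiency(name):
    """Single-input, single-output CRS efficiency of a DMU (1.0 = efficient)."""
    inp, out = dmus[name]
    return (out / inp) / best_ratio
```

An "unobserved DMU" in the spirit of the paper would be a new entry in `dmus` obtained by adjusting an efficient DMU's input/output levels to encode a judged trade-off, after which all units are reassessed against the enlarged set.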
Abstract:
Performance evaluation in conventional data envelopment analysis (DEA) requires crisp numerical values. However, the observed values of the input and output data in real-world problems are often imprecise or vague. These imprecise and vague data can be represented by linguistic terms characterised by fuzzy numbers in DEA to reflect the decision-makers' intuition and subjective judgements. This paper extends the conventional DEA models to a fuzzy framework by proposing a new fuzzy additive DEA model for evaluating the efficiency of a set of decision-making units (DMUs) with fuzzy inputs and outputs. The contribution of this paper is threefold: (1) we consider ambiguous, uncertain and imprecise input and output data in DEA, (2) we propose a new fuzzy additive DEA model derived from the α-level approach and (3) we demonstrate the practical aspects of our model with two numerical examples and show its comparability with five different fuzzy DEA methods in the literature. Copyright © 2011 Inderscience Enterprises Ltd.
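The α-level idea underlying such models is simple to sketch: a fuzzy number is sliced at a membership level α to give a crisp interval, and a crisp model is then solved over those intervals. Below is a minimal, generic α-cut for a triangular fuzzy number (l, m, u); the numbers in the test are illustrative and are not taken from the paper.

```python
def alpha_cut(l, m, u, alpha):
    """Crisp interval of the triangular fuzzy number (l, m, u) at level alpha.

    At alpha = 0 the cut is the full support [l, u]; at alpha = 1 it
    collapses to the modal value m.
    """
    assert 0.0 <= alpha <= 1.0 and l <= m <= u
    lo = l + alpha * (m - l)
    hi = u - alpha * (u - m)
    return lo, hi
```

A fuzzy DEA model in this style would evaluate each DMU's efficiency over such intervals for a grid of α values, yielding an efficiency interval per DMU rather than a single crisp score.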
Abstract:
Perceptions about the quality of learning and teaching in Higher Education have for many years focused upon the application of market-based principles. This includes the notion of students as "customers" of the Higher Education Institution's (HEI) service. We argue, however, that the customer analogy is unhelpful, as this approach is likely to affect students' expectations about the service and their judgements about its quality. The purpose of this paper is to propose a study consisting of a series of interventions to develop a culture of value co-creation (CCV) at a UK-based HEI. By introducing CCV principles, it is hoped to steer students away from seeing themselves as "customers" and passive recipients in the learning and teaching process, towards taking responsibility for their own learning experience, to be explored and acted upon in partnership with their lecturers and other stakeholders.
Abstract:
Evaluations of semantic search systems are generally small scale and ad hoc due to the lack of appropriate resources such as test collections, agreed performance criteria and independent judgements of performance. By analysing our work in building and evaluating semantic tools over the last five years, we conclude that the growth of the semantic web led to an improvement in the available resources and the consequent robustness of performance assessments. We propose two directions for continuing evaluation work: the development of extensible evaluation benchmarks and the use of logging parameters for evaluating individual components of search systems.
Abstract:
Purpose: (1) To devise a model-based method for estimating the probabilities of binocular fusion, interocular suppression and diplopia from psychophysical judgements; (2) to map out the way fusion, suppression and diplopia vary with binocular disparity and blur of single edges shown to each eye; (3) to compare the binocular interactions found for edges of the same vs opposite contrast polarity. Methods: Test images were single, horizontal, Gaussian-blurred edges, with blur B = 1-32 min arc and vertical disparity from 0 to 8B, shown for 200 ms. In the main experiment, observers reported whether they saw one central edge, one offset edge, or two edges. We argue that the relation between these three response categories and the three perceptual states (fusion, suppression, diplopia) is indirect and likely to be distorted by positional noise and criterion effects, and so we developed a descriptive, probabilistic model to estimate both the perceptual states and the noise/criterion parameters from the data. Results: (1) Using simulated data, we validated the model-based method by showing that it recovered fairly accurately the disparity ranges for fusion and suppression; (2) the disparity range for fusion (Panum's limit) increased greatly with blur, in line with previous studies. The disparity range for suppression was similar to the fusion limit at large blurs, but two or three times the fusion limit at small blurs. This meant that diplopia was much more prevalent at larger blurs; (3) diplopia was much more frequent when the two edges had opposite contrast polarity. A formal comparison of models indicated that fusion occurs for same, but not opposite, polarities. Probability of suppression was greater for unequal contrasts, and it was always the lower-contrast edge that was suppressed. Conclusions: Our model-based data analysis offers a useful tool for probing binocular fusion and suppression psychophysically.
The disparity range for fusion increased with edge blur but fell short of complete scale-invariance. The disparity range for suppression also increased with blur but was not close to scale-invariance. Single vision occurs through fusion, but also beyond the fusion range, through suppression. Thus suppression can serve as a mechanism for extending single vision to larger disparities, but mainly for sharper edges where the fusion range is small (5-10 min arc). For large blurs the fusion range is so much larger that no such extension may be needed. © 2014 The College of Optometrists.
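The indirect mapping the authors describe, from three hidden perceptual states to three response categories, has the form of a simple mixture: each observed response probability is a sum over states of the state probability times a state-conditional response probability, with the conditional terms absorbing noise and criterion effects. The sketch below illustrates that forward model only; all numbers are invented, not fitted values from the study.

```python
# Hypothetical state probabilities for one disparity/blur condition.
p_state = {"fusion": 0.5, "suppression": 0.3, "diplopia": 0.2}

# P(report | state). The rows are deliberately not diagonal: positional
# noise and criterion shifts mean a state does not map cleanly onto
# one response category.
p_report_given = {
    "fusion":      {"one_central": 0.9, "one_offset": 0.1, "two": 0.0},
    "suppression": {"one_central": 0.2, "one_offset": 0.7, "two": 0.1},
    "diplopia":    {"one_central": 0.0, "one_offset": 0.1, "two": 0.9},
}

def p_report(report):
    """Predicted probability of a response category under the mixture."""
    return sum(p_state[s] * p_report_given[s][report] for s in p_state)
```

Model fitting would run this forward model in reverse: adjust the state probabilities (and the conditional/noise parameters) until the predicted response probabilities match the observed report frequencies.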
Abstract:
Although the importance of dataset fitness-for-use evaluation and intercomparison is widely recognised within the GIS community, no practical tools have yet been developed to support such interrogation. GeoViQua aims to develop a GEO label which will visually summarise and allow interrogation of key informational aspects of geospatial datasets upon which users rely when selecting datasets for use. The proposed GEO label will be integrated in the Global Earth Observation System of Systems (GEOSS) and will be used as a value and trust indicator for datasets accessible through the GEO Portal. As envisioned, the GEO label will act as a decision support mechanism for dataset selection and thereby hopefully improve user recognition of the quality of datasets. To date we have conducted three user studies to (1) identify the informational aspects of geospatial datasets upon which users rely when assessing dataset quality and trustworthiness, (2) elicit initial user views on a GEO label and its potential role and (3) evaluate prototype label visualisations. Our first study revealed that, when evaluating quality of data, users consider 8 facets: dataset producer information; producer comments on dataset quality; dataset compliance with international standards; community advice; dataset ratings; links to dataset citations; expert value judgements; and quantitative quality information. Our second study confirmed the relevance of these facets in terms of the community-perceived function that a GEO label should fulfil: users and producers of geospatial data supported the concept of a GEO label that provides a drill-down interrogation facility covering all 8 informational aspects. Consequently, we developed three prototype label visualisations and evaluated their comparative effectiveness and user preference via a third user study to arrive at a final graphical GEO label representation.
When integrated in the GEOSS, an individual GEO label will be provided for each dataset in the GEOSS clearinghouse (or other data portals and clearinghouses) based on its available quality information. Producer and feedback metadata documents are being used to dynamically assess information availability and generate the GEO labels. The producer metadata document can either be a standard ISO compliant metadata record supplied with the dataset, or an extended version of a GeoViQua-derived metadata record, and is used to assess the availability of a producer profile, producer comments, compliance with standards, citations and quantitative quality information. GeoViQua is also currently developing a feedback server to collect and encode (as metadata records) user and producer feedback on datasets; these metadata records will be used to assess the availability of user comments, ratings, expert reviews and user-supplied citations for a dataset. The GEO label will provide drill-down functionality which will allow a user to navigate to a GEO label page offering detailed quality information for its associated dataset. At this stage, we are developing the GEO label service that will be used to provide GEO labels on demand based on supplied metadata records. In this presentation, we will provide a comprehensive overview of the GEO label development process, with specific emphasis on the GEO label implementation and integration into the GEOSS.
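The availability assessment described above can be pictured as a per-facet check over a metadata record. The record format and field names below are our own simplification for illustration; the facet names follow the 8 facets listed in the abstract, and the actual GEO label service works from ISO-compliant and GeoViQua metadata documents rather than plain dictionaries.

```python
# The 8 GEO label facets identified in the user studies.
FACETS = [
    "producer_information", "producer_comments", "standards_compliance",
    "community_advice", "ratings", "citations",
    "expert_value_judgements", "quantitative_quality",
]

def facet_availability(record):
    """Map each facet to True when the (simplified) record supplies it."""
    return {facet: bool(record.get(facet)) for facet in FACETS}

# Hypothetical, simplified metadata record for one dataset.
record = {
    "producer_information": {"organisation": "Example Producer"},
    "standards_compliance": ["ISO 19115"],
    "ratings": [4, 5],
}
availability = facet_availability(record)
```

A label generator would then render one visual element per facet, greyed out where `availability` is False, with drill-down links to the underlying quality information where it is True.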