16 results for Judge

in Aston University Research Archive


Relevance:

20.00%

Abstract:

In perceptual terms, the human body is a complex 3D shape which has to be interpreted by the observer to judge its attractiveness. Both body mass and shape have been suggested as strong predictors of female attractiveness. Normally, body mass and shape co-vary, and it is difficult to differentiate their separate effects. A recent study suggested that altering body mass does not modulate activity in the reward mechanisms of the brain, but shape does. However, using computer-generated, female body-shaped greyscale images, based on a principal component analysis of female bodies, we were able to construct images which co-vary with real female body mass (indexed by body mass index, BMI) and not with body shape (indexed by waist-to-hip ratio, WHR), and vice versa. Twelve observers (6 male and 6 female) rated these images for attractiveness during an fMRI study. The attractiveness ratings were correlated with changes in BMI and not WHR. Our primary fMRI results demonstrated that, in addition to activation in higher visual areas (such as the extrastriate body area), changing BMI also modulated activity in the caudate nucleus and other parts of the brain reward system. This shows that BMI, not WHR, modulates reward mechanisms in the brain, and we infer that this may have important implications for judgements of ideal body size in eating-disordered individuals.

Relevance:

20.00%

Abstract:

Using a video-review procedure, multiple perceivers carried out mind-reading tasks of multiple targets at different levels of acquaintanceship (50 dating couples, friends of the dating partners, and strangers). As predicted, the authors found that mind-reading accuracy was (a) higher as a function of increased acquaintanceship, (b) relatively unaffected by target effects, (c) influenced by individual differences in perceivers' ability, and (d) higher for female than male perceivers. In addition, superior mind-reading accuracy (for dating couples and friends) was related to higher relationship satisfaction, closeness, and more prior disclosure about the problems discussed, but only under moderating conditions related to sex and relationship length. The authors conclude that the nature of the relationship between the perceiver and the target occupies a pivotal role in determining mind-reading accuracy.

Relevance:

10.00%

Abstract:

Advances in functional brain imaging have allowed the development of new investigative techniques with clinical application, ranging from presurgical mapping of eloquent cortex to identifying cortical regions involved in religious experiences. Similarly, a variety of methods are available to referring physicians, ranging from metabolic measures such as functional magnetic resonance imaging and positron emission tomography to measurements based on electrical activity, such as electroencephalography and magnetoencephalography. However, there are no universal benchmarks by which to judge between these methods. In this study we attempt to develop a standard for functional localisation, based on the known functional organisation of somatosensory cortex. Studies have shown spatially distinct sites of brain activity in response to stimulation of various body parts. Generally, these studies have focused on areas with large cortical representations, such as the index finger and face. We tested the limits of magnetoencephalography source localisation by stimulation of body parts, namely the clunis and the cubitus, that map to proximal and relatively poorly represented regions of somatosensory cortex.

Relevance:

10.00%

Abstract:

Testing whether an observed distribution deviates from normality is a common type of statistical test available in statistics software. Most packages offer two ways of judging whether there are significant deviations of the observed from the expected distribution, viz., the chi-square test and the Kolmogorov-Smirnov (KS) test. These tests have different sensitivities and problems, and often give conflicting results. The results of these tests, together with inspection of the shape of the observed distribution, should be used to judge normality.
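
As a minimal illustration of running both tests on the same sample (invented data; SciPy assumed available; note that estimating the normal parameters from the sample makes the standard KS p-value conservative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=10.0, scale=2.0, size=200)

# Kolmogorov-Smirnov test against a normal distribution whose parameters
# are estimated from the sample itself (Lilliefors' correction would be
# the stricter choice in this situation).
ks_stat, ks_p = stats.kstest(sample, "norm",
                             args=(sample.mean(), sample.std(ddof=1)))

# Chi-square goodness of fit: bin the data and compare observed counts
# with those expected under the fitted normal.
edges = np.linspace(sample.min(), sample.max(), 11)  # 10 bins
observed, _ = np.histogram(sample, bins=edges)
cdf = stats.norm.cdf(edges, sample.mean(), sample.std(ddof=1))
expected = len(sample) * np.diff(cdf)
expected *= observed.sum() / expected.sum()  # rescale so totals match
chi2_stat, chi2_p = stats.chisquare(observed, expected, ddof=2)  # 2 fitted params

print(f"KS: D={ks_stat:.3f}, p={ks_p:.3f}")
print(f"Chi-square: X2={chi2_stat:.3f}, p={chi2_p:.3f}")
```

The two p-values will often disagree near the margins, which is exactly why the abstract recommends inspecting the distribution's shape as well.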

Relevance:

10.00%

Abstract:

We examined the effect of grouping by the alignment of implicit axes on the perception of multiple shapes, using a patient (GK) who shows simultanagnosia as part of Bálint's syndrome. Five experiments demonstrated that: (1) GK was better able to judge the orientation of a global configuration if the constituent local shapes were aligned with their major axes than if they were aligned with their edges; (2) this axis information was used implicitly, since GK was unable to discriminate between configurations of axis-aligned and edge-aligned shapes; (3) GK's sensitivity to axis-alignment persisted even when the orientations of local shapes were kept constant, indicating some form of cooperative effect between the local elements; (4) axis-alignment of shapes also facilitated his ability to discriminate single-item from multi-item configurations; (5) the effect of axis-alignment could be attributed, at least partially, to the degree to which there was matching between the orientations of local shapes and the global configuration. Taken together, the results suggest that axis-based grouping can support the selection of multiple objects.

Relevance:

10.00%

Abstract:

There are two aspects of Parkinson's disease (PD) of particular interest to optometrists. First, PD patients can develop a range of visual problems, including those affecting eye movement, pupillary function, and complex visual functions such as the ability to judge distance or make out the shape of an object. Second, the symptoms of PD can be treated successfully using a variety of drugs, some of which have significant ocular adverse reactions (OARs). This article describes the general features of PD, the dopamine neurotransmitter system and its relevance to eye symptoms, the visual symptoms reported in PD, and the OARs that have been reported.

Relevance:

10.00%

Abstract:

The work described in the following pages was carried out at various sites in the Rod Division of the Delta Metal Company. Extensive variation in the level of activity in the industry during the years 1974 to 1975 had led to certain inadequacies being observed in the traditional cost control procedure. In an attempt to remedy this situation, it was suggested that a method be found of constructing a system to improve the flexibility of cost control procedures. The work involved an assimilation of the industrial and financial environment via pilot studies, which later proved invaluable in homing in on the most interesting and important areas. Weaknesses in the current systems which came to light made the methodology of data collection and the improvement of cost control and profit planning procedures easier to adopt. Because the project was required to investigate the implications of cost behaviour for profit planning and control, the next stage of the research was to use the on-site experience to examine the nature of cost behaviour in detail. The analysis of factory costs showed that certain costs, which were the most significant, exhibited a stable relationship with respect to some known variable, usually a specific measure of output. These costs were then formulated in a cost model, to establish accurate standards in a complex industrial setting and so provide a meaningful comparison against which to judge actual performance. The necessity of a cost model was reinforced by the fact that the cost behaviour found to exist was, in the main, a step function, and this complex cost behaviour the traditional cost and profit planning procedures could not incorporate. Already implemented from this work is the establishment of the post of information officer to co-ordinate data collection and information provision.
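
The step-function cost behaviour described above can be sketched in a few lines of Python (all figures and parameter names below are invented for illustration, not taken from the thesis):

```python
import math

def step_cost(output_units, capacity_per_shift=1000, cost_per_shift=5000.0,
              cost_per_unit=2.0):
    """Hypothetical step-function cost model: each additional shift adds a
    fixed block of cost on top of a small variable cost per unit produced."""
    shifts = math.ceil(output_units / capacity_per_shift) if output_units else 0
    return shifts * cost_per_shift + output_units * cost_per_unit

# Cost jumps at each capacity threshold rather than rising smoothly,
# which is why a simple linear standard cost would misjudge performance:
print(step_cost(900))    # one shift
print(step_cost(1100))   # two shifts
```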

Relevance:

10.00%

Abstract:

This study has been conceived with the primary objective of identifying and evaluating the financial aspects of the transformation in country/company relations of the international oil industry from the traditional concessionary system to the system of governmental participation in the ownership and operation of oil concessions. The emphasis of the inquiry was placed on assembling a case study of the oil exploitation arrangements of Libya. Through a comprehensive review of the literature, the sociopolitical factors surrounding the international oil business were identified and examined in an attempt to see their influence on contractual arrangements and particularly to gauge the impact of any induced contractual changes on the revenue benefit accruing to the host country from its oil operations. Some comparative analyses were made in the study to examine the viability of the Libyan participation deals both as an investment proposal and as a system of conducting oil activities in the country. The analysis was carried out in the light of specific hypotheses to assess the relative impact of the participation scheme in comparison with the alternative concessionary model on the net revenue resulting to the government from oil operations and the relative effect on the level of research and development within the industry. A discounted cash flow analysis was conducted to measure inputs and outputs of the comparative models and judge their revenue benefits. Then an empirical analysis was carried out to detect any significant behavioural changes in the exploration and development effort associated with the different oil exploitation systems. Results of the investigation of revenues support the argument that the mere introduction of the participation system has not resulted in a significant revenue benefit to the host government. 
Though there has been a significant increase in government revenue, associated with the period following the emergence of the participation agreements, this increase was mainly due to socio-economic factors other than the participation scheme. At the same time the empirical results have shown an association of the participation scheme with a decline of the oil industry's research and development efforts.
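
The discounted-cash-flow comparison described above reduces to a net-present-value calculation; a generic sketch follows (the discount rate and cash flows are invented for illustration, not the thesis data):

```python
def npv(rate, cash_flows):
    """Net present value of a series of annual cash flows,
    with cash_flows[0] occurring immediately (year 0)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical comparison of two oil-exploitation arrangements:
concession = [-100.0, 40.0, 40.0, 40.0, 40.0]
participation = [-100.0, 30.0, 45.0, 45.0, 45.0]
print(npv(0.10, concession))
print(npv(0.10, participation))
```

Whichever arrangement yields the higher NPV at the government's discount rate is judged the better revenue proposition, which is the logic of the comparative models in the study.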

Relevance:

10.00%

Abstract:

In this article, it is argued that reflexivity is integral to experiential qualitative research in psychology. Reflexivity has been defined in many ways; Woolgar's continuum of reflexivity, though, provides a useful gauge by which to judge whether a researcher is engaged in simple reflection or reflexivity. The article demonstrates the benefits of adopting a reflexive attitude by presenting a "challenge-to-competency". The author's encounter with Sarah helps illustrate the role of reflexivity both in data generation and in interpretative analysis. To close, it is proposed that reflexivity as hermeneutic reflection, with its grounding in hermeneutics and phenomenology, is a useful construct for guiding our engagement in reflexivity in experiential qualitative research.

Relevance:

10.00%

Abstract:

This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation for it was the lack of viable approaches for analysing High Throughput Screening datasets, which may include thousands of data points of high dimensionality. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads which can be optimised and further developed into candidate drugs. Since the development of new robotic technologies, the ability to test the activities of compounds has increased considerably in recent years. Traditional methods, looking at tables and graphical plots to analyse relationships between measured activities and the structure of compounds, are not feasible when facing a large HTS dataset. Instead, data visualisation provides a method for analysing such large datasets, especially those with high dimensions. So far, a few visualisation techniques for drug design have been developed, but most of them handle only a few properties of compounds at a time. We believe that a latent variable model with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model (LTM) can deal with either continuous data or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can infer the distribution of the data from magnification factor and curvature plots. Rather than obtaining the useful information from just a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found.
The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive fashion (top-down): the user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail. At each stage of hierarchical LTM construction, the EM algorithm alternates between the E-step and the M-step. Another problem that can occur when visualising a large data set is that there may be significant overlaps of data clusters, making it very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model. In this thesis we also demonstrate the applicability of the hierarchy of latent trait models in the field of document data mining.
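
The E-step/M-step alternation can be illustrated with a toy example. The sketch below fits a plain one-dimensional Gaussian mixture by EM; it is not the hierarchical latent trait model itself, only the same algorithmic skeleton, with invented data:

```python
import math

def em_gmm_1d(data, iters=50):
    """Toy EM for a two-component 1-D Gaussian mixture (illustration of
    the E/M alternation only, not the hierarchical LTM)."""
    means = [min(data), max(data)]          # deterministic initialisation
    vars_ = [1.0, 1.0]
    weights = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each data point
        resp = []
        for x in data:
            ps = [w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
                  for w, m, v in zip(weights, means, vars_)]
            s = sum(ps)
            resp.append([p / s for p in ps])
        # M-step: re-estimate parameters from the soft assignments
        for j in range(2):
            nj = sum(r[j] for r in resp)
            means[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
            vars_[j] = max(sum(r[j] * (x - means[j]) ** 2
                               for r, x in zip(resp, data)) / nj, 1e-6)
            weights[j] = nj / len(data)
    return means, vars_, weights

data = [0.1, -0.2, 0.3, 0.0, 4.8, 5.0, 5.1, 5.2]
means, _, weights = em_gmm_1d(data)
print(sorted(means))   # two cluster centres, near 0 and 5
```

In the hierarchical setting, the same loop is run for each child LTM, with responsibilities restricted to the region of the parent plot the user selected.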

Relevance:

10.00%

Abstract:

The specific objective of the research was to evaluate proprietary audit systems. Proprietary audit systems comprise question sets containing approximately 500 questions dealing with selected aspects of health and safety management. Each question is allotted a number of points, and an organisation seeks to judge its health and safety performance by the overall score achieved in the audit. Initially it was considered that the evaluation method might involve comparing the proprietary audit scores with other methods of measuring safety performance. However, what appeared to be missing in the first instance was information that organisations could use to compare and contrast question set content against their own needs. A technique was developed using the computer database FileMaker Pro. This enables questions in an audit to be sorted into categories using a process of searching for key words. Questions that are not categorised by word searching can be identified and sorted manually. The process can be completed in 2-3 hours, which is considerably faster than manual categorisation of questions, which typically takes about 10 days. The technique was used to compare and contrast three proprietary audits: ISRS, CHASE and QSA. Differences and similarities between these audits were successfully identified. It was concluded that, in general, proprietary audits need to focus to a greater extent on identifying strengths and weaknesses in occupational health and safety management systems. To do this requires the inclusion of more probing questions which consider whether risk control measures are likely to be successful.
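
The keyword-sorting idea can be sketched in a few lines of Python (the categories, key words and questions below are invented examples, not those used in the study, which implemented the technique in FileMaker Pro):

```python
# Assign each audit question to the first category whose key words it
# mentions; anything unmatched is flagged for manual sorting.
CATEGORIES = {
    "training":   ["training", "instruction", "competence"],
    "procedures": ["procedure", "permit", "method statement"],
    "monitoring": ["inspection", "audit", "monitoring"],
}

def categorise(question):
    text = question.lower()
    for category, keywords in CATEGORIES.items():
        if any(kw in text for kw in keywords):
            return category
    return "uncategorised"  # left for manual review

questions = [
    "Is induction training provided for all new employees?",
    "Are permit-to-work procedures in place for hot work?",
    "How often are workplace inspections carried out?",
    "Does senior management review safety policy annually?",
]
for q in questions:
    print(categorise(q), "-", q)
```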

Relevance:

10.00%

Abstract:

Evaluation and benchmarking in content-based image retrieval has always been a somewhat neglected research area, making it difficult to judge the efficacy of many presented approaches. In this paper we investigate the issue of benchmarking for colour-based image retrieval systems, which enable users to retrieve images from a database based on low-level colour content alone. We argue that current image retrieval evaluation methods are not suited to benchmarking colour-based image retrieval systems, due in the main to their not allowing users to reflect upon the suitability of retrieved images within the context of a creative project, and to their reliance on highly subjective ground truths. As a solution to these issues, the research presented here introduces the Mosaic Test for evaluating colour-based image retrieval systems, in which test users are asked to create an image mosaic of a predetermined target image, using the colour-based image retrieval system that is being evaluated. We report on our findings from a user study which suggest that the Mosaic Test overcomes the major drawbacks associated with existing image retrieval evaluation methods, by enabling users to reflect upon image selections and by automatically measuring image relevance in a way that correlates with the perception of many human assessors. We therefore propose that the Mosaic Test be adopted as a standardised benchmark for evaluating and comparing colour-based image retrieval systems.
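
As a rough illustration of retrieval by low-level colour content alone (a generic sketch, not the system evaluated in the paper): quantise pixels into coarse RGB bins and rank database images by histogram intersection with the query.

```python
def colour_histogram(pixels, bins_per_channel=2):
    """Normalised histogram over coarsely quantised RGB bins."""
    step = 256 // bins_per_channel
    hist = {}
    for r, g, b in pixels:
        key = (r // step, g // step, b // step)
        hist[key] = hist.get(key, 0) + 1
    return {k: v / len(pixels) for k, v in hist.items()}

def intersection(h1, h2):
    """Histogram intersection: 1.0 means identical colour distributions."""
    return sum(min(h1.get(k, 0.0), h2.get(k, 0.0)) for k in set(h1) | set(h2))

# Tiny stand-ins for real images (lists of RGB tuples):
red_image = [(250, 10, 10)] * 100
pink_image = [(250, 120, 120)] * 100
blue_image = [(10, 10, 250)] * 100

query = colour_histogram(red_image)
for name, img in [("pink", pink_image), ("blue", blue_image)]:
    print(name, round(intersection(query, colour_histogram(img)), 2))
```

Note that with such coarse quantisation red and pink fall into the same bin; real systems use finer bins, but the example shows why ground truth for "similar colour" is so subjective.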

Relevance:

10.00%

Abstract:

A sequence of constant-frequency tones can promote streaming in a subsequent sequence of alternating-frequency tones, but why this effect occurs is not fully understood and its time course has not been investigated. Experiment 1 used a 2.0-s-long constant-frequency inducer (10 repetitions of a low-frequency pure tone) to promote segregation in a subsequent, 1.2-s test sequence of alternating low- and high-frequency tones. Replacing the final inducer tone with silence substantially reduced reported test-sequence segregation. This reduction did not occur when either the 4th or 7th inducer was replaced with silence. This suggests that a change at the induction/test-sequence boundary actively resets build-up, rather than less segregation occurring simply because fewer inducer tones were presented. Furthermore, Experiment 2 found that a constant-frequency inducer produced its maximum segregation-promoting effect after only three tones—this contrasts with the more gradual build-up typically observed for alternating-frequency sequences. Experiment 3 required listeners to judge continuously the grouping of 20-s test sequences. Constant-frequency inducers were considerably more effective at promoting segregation than alternating ones; this difference persisted for ~10 s. In addition, resetting arising from a single deviant (longer tone) was associated only with constant-frequency inducers. Overall, the results suggest that constant-frequency inducers promote segregation by capturing one subset of test-sequence tones into an ongoing, preestablished stream, and that a deviant tone may reduce segregation by disrupting this capture. These findings offer new insight into the dynamics of stream segregation, and have implications for the neural basis of streaming and the role of attention in stream formation. (PsycINFO Database Record (c) 2013 APA, all rights reserved)

Relevance:

10.00%

Abstract:

Managing Innovation is an established, bestselling text for MBA, MSc and advanced undergraduate courses on management of technology, innovation management and entrepreneurship. It is also used widely by managers in both the service and manufacturing sectors. Now in its fourth edition, Managing Innovation has been fully revised and updated based on extensive user feedback to incorporate the latest findings and techniques in innovation management. The authors have included a new and more explicit innovation model, which is used throughout the book, and have introduced two new features, Research Notes and Views from the Front Line, to incorporate more real-life case material into the book. The strong evidence-based and practical approach makes this a must-read for anyone studying or working within innovation. An extensive website accompanies this text at www.managing-innovation.com. Readers can browse an online database of audio and video clips, as well as case study material, interactive exercises and tools for innovation, whilst lecturers can find additional support material including instructor slides, teaching guides and tips. "Tidd and Bessant's text has become a standard for students and practitioners of innovation. They offer a lively account of innovation management, full of interesting and new examples, but one that at the same time is rigorously anchored in what we have learned over the last thirty years on how to manage that ultimate business challenge of renewing products, processes, and business models. Those who want to innovate must read this book." — Professor Arnoud De Meyer, Director, Judge Business School, University of Cambridge, UK "Innovation matters, and this book by two leaders in the field, which is clear and practical as well as rigorous, should be essential reading for all seeking to study or to become involved in innovation."
— Chris Voss, Professor of Operations and Technology Management, London Business School "...comprehensive and comprehensible compendium on the management of innovation. It is very well organized and very well presented. A pedagogic tool that will work at multiple levels for those wishing to gain deeper insights into some of the most challenging and important management issues of the day." — David J. Teece, Thomas W. Tusher Professor in Global Business, Haas School of Business, University of California, Berkeley, USA "Those of us who teach in the field of Innovation Management were delighted when the first edition of this book appeared 11 years ago. The field had long been in need of such a comprehensive and integrated empirically-based work. The fact that this is now the 4th edition is clear testimony to the value of its contribution. We are deeply indebted to the authors for their dedication and diligence in providing us with this updated and expanded volume." — Thomas J. Allen, Howard W. Johnson Professor of Management, MIT Sloan School of Management, USA.

Relevance:

10.00%

Abstract:

Readers may have noted that a short but very important announcement was made in the last issue of CLAE, at the top of the contents page. CLAE has been accepted by Thomson Reuters for abstracting and indexing in its SciSearch, Journal Citation Reports, and Current Contents services. This will ensure greater visibility to the international research community. In addition, in June 2012 CLAE will receive its very first official Impact Factor, a measure of journal influence of importance to authors and readers alike. The impact factor value has not yet been decided, but internal estimates by Elsevier suggest it will be around 1, and it will be applied to all CLAE issues back to January 2009 (volume 32). I would guess readers at this stage would have one of two responses: either 'that's good news' or perhaps 'what's an impact factor?' If you are in the latter camp then allow me to try and explain. Basically, the impact factor or citation index of a journal is based on how many times papers published in that journal in the previous two years were cited during the past year by authors publishing in other journals. So the 2012 impact factor for CLAE is calculated from how many times papers published in CLAE in 2009 and 2010 were cited in other journals in 2011, divided by the number of papers published in CLAE in 2009 and 2010. Essentially, authors will try to get their work published in journals with a higher impact factor, as it is thought that the paper will be cited more by other authors or will have higher visibility. For universities, having published output in higher-impact journals is one of the markers used to judge esteem. For individual authors, publishing in journals with a higher impact factor, or the number of times their papers are cited, is something they are likely to add to their CVs to demonstrate the importance of their work.
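
The impact-factor arithmetic just described is simple to express (the figures below are invented for illustration, not real CLAE numbers):

```python
def impact_factor(citations_to_prev_two_years, papers_in_prev_two_years):
    """Impact factor for year Y: citations received in Y to items published
    in Y-1 and Y-2, divided by the number of items published in Y-1 and Y-2."""
    return citations_to_prev_two_years / papers_in_prev_two_years

# e.g. 120 citations in 2011 to papers CLAE published in 2009-2010,
# with 100 papers published in those two years:
print(impact_factor(120, 100))  # 1.2
```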
Journals with higher impact factors tend to be review journals or journals with a wider spectrum, so for a relatively small journal with a specialised field like CLAE it is great to be listed with a citation index. The awarding of a citation index crowns many changes that CLAE has undergone since the current Editor took the reins in 2005. CLAE has increased from four issues per year (in 2004) to six, with at least one review article and one continuing-education article per issue. The rejection rate has gone up significantly, meaning that only the best papers are published (currently it stands at 37%). CLAE has been Medline/PubMed indexed for a few years now, which is also a very important factor in improving the visibility of the journal. The submission and reviewing process for CLAE is now entirely online, and finally the editorial board has changed from being merely a list of keynote people to being an active group who are enthusiastically involved with the journal. From the editorial board, one person is appointed as Reviews Editor, plus we have two additional editors who work as Regional Editors. As ever, on behalf of CLAE I would like to thank the BCLA Council for their continued support (especially Vivien Freeman), Elsevier for their continuing guidance (in particular Andrew Miller and Rosie Davey), and the excellent Editorial Board (Christopher Snyder, Pauline Cho, Eric Papas, Jan Bergmanson, Roger Buckley, Patrick Caroline, Dwight Cavanagh, Robin Chalmers, Michael Doughty, Nathan Efron, Michel Guillon, Nizar Hirji, Meng Lin, Florence Malet, Philip Morgan, Deborah Sweeney, Brian Tighe, Eef van der Worp, Barry Weissman, Mark Willcox, James Wolffsohn and Craig Woods). And finally, a big thanks to the authors and reviewers who work tirelessly putting manuscripts together for publication in CLAE. Copyright © 2012 Published by Elsevier Ltd.