11 results for Experimental Literature
in Aston University Research Archive
Abstract:
A critical review of the auditory selective attention literature is presented, with particular reference to methodological issues arising from the asymmetrical hemispheric representation of language in the context of the dominant research technique, dichotic shadowing. Subsequently the concept of cerebral localization is introduced, and the experimental literature is discussed with reference to models of laterality effects in speech and audition. The review indicated the importance of hemispheric asymmetries insofar as they might influence the results of dichotic shadowing tasks. It is suggested that there is a potential overlap between models of selective attention and hemispheric differences. In Experiment I, a key experiment in auditory selective attention is replicated and, by exercising control over possible laterality effects, some of the conflicting results of earlier studies are reconciled. The three subsequent experiments, II, III and IV, are concerned with the recall of verbally shadowed inputs. A highly significant and consistent effect of ear of arrival upon the serial position of items recalled is reported. Experiment V is directed towards an analysis of the effect that the processing of unattended inputs has upon the serial position of attended items that are recalled. A significant effect of the type of unattended material upon the recall of attended items was found to be influenced by the ear of arrival of inputs. In Experiment VI, differences between the two ears as attended and unattended input channels were clarified. Two main conclusions were drawn from this work. First, that the dichotic shadowing technique cannot control attention; instead, the task of processing both channels of dichotic inputs is unevenly shared between the hemispheres as a function of the ear shadowed.
Consequently, evidence for the processing of unattended information is considered in terms of constraints imposed by asymmetries in the functional organization of language, not in terms of a limited processing capacity model. The second conclusion is that laterality differences can be effectively examined using the dichotic shadowing technique; a new model of laterality differences is proposed and discussed.
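The ear-of-arrival effect on serial position reported in Experiments II-IV can be made concrete with a small tally. A minimal sketch in Python, assuming a simple trial format (ear label plus the 0-based serial positions recalled); the data layout, list length and example numbers are illustrative, not taken from the thesis:

```python
from collections import defaultdict

def serial_position_curve(trials, list_length):
    """Per-ear recall proportions at each serial position.

    trials: iterable of (ear, recalled_positions) pairs, where ear is
    'left' or 'right' and recalled_positions holds the 0-based serial
    positions of the items correctly recalled on that trial.
    """
    counts = defaultdict(lambda: [0] * list_length)
    totals = defaultdict(int)
    for ear, recalled in trials:
        totals[ear] += 1
        for pos in recalled:
            counts[ear][pos] += 1
    return {ear: [c / totals[ear] for c in counts[ear]] for ear in counts}

# Illustrative data: right-ear trials favour early serial positions.
trials = [
    ('right', [0, 1, 5]),
    ('right', [0, 1, 2]),
    ('left',  [4, 5]),
    ('left',  [0, 5]),
]
curve = serial_position_curve(trials, 6)
```

Plotting the two curves against serial position would show the kind of ear-dependent recall profile the thesis analyses.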
Abstract:
This paper estimates the implicit model, especially the roles of size asymmetries and firm numbers, used by the European Commission to identify mergers with coordinated effects. This subset of cases offers an opportunity to shed empirical light on the conditions where a Competition Authority believes tacit collusion is most likely to arise. We find that, for the Commission, tacit collusion is a rare phenomenon, largely confined to markets of two, more or less symmetric, players. This is consistent with recent experimental literature, but contrasts with the facts on ‘hard-core’ collusion in which firm numbers and asymmetries are often much larger.
Abstract:
It is conventional wisdom that collusion is more likely the fewer firms there are in a market and the more symmetric they are. This is often theoretically justified in terms of a repeated non-cooperative game. Although that model fits more easily with tacit than overt collusion, the impression sometimes given is that ‘one model fits all’. Moreover, the empirical literature offers few stylized facts on the most simple of questions—how few are few and how symmetric is symmetric? This paper attempts to fill this gap while also exploring the interface of tacit and overt collusion, albeit in an indirect way. First, it identifies the empirical model of tacit collusion that the European Commission appears to have employed in coordinated effects merger cases—apparently only fairly symmetric duopolies fit the bill. Second, it shows that, intriguingly, the same story emerges from the quite different experimental literature on tacit collusion. This offers a stark contrast with the findings for a sample of prosecuted cartels; on average, these involve six members (often more) and size asymmetries among members are often considerable. The indirect nature of this ‘evidence’ cautions against definitive conclusions; nevertheless, the contrast offers little comfort for those who believe that the same model does, more or less, fit all.
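The "fewer and more symmetric" intuition has a standard repeated-game expression: under grim-trigger punishment in a symmetric Bertrand setting, collusion is sustainable only if the discount factor is at least 1 - 1/n, so every extra firm raises the bar. A minimal sketch of this textbook threshold (not the paper's empirical specification):

```python
def critical_discount_factor(n):
    """Minimum discount factor at which grim-trigger collusion is
    sustainable in a symmetric repeated Bertrand game with n firms:
    colluding pays pi_m / n every period, a one-shot deviation captures
    the whole pi_m followed by zero profits forever, so collusion holds
    iff (pi_m / n) / (1 - delta) >= pi_m, i.e. delta >= 1 - 1/n."""
    return 1.0 - 1.0 / n

# A duopoly needs delta >= 0.5; a six-firm cartel needs delta >= 5/6.
duopoly = critical_discount_factor(2)
six_firms = critical_discount_factor(6)
```

The contrast in the paper, tacit collusion confined to symmetric duopolies while prosecuted cartels average six members, sits naturally against this threshold: larger groups need much more patience to sustain collusion tacitly.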
Abstract:
This is a review of studies that have investigated the proposed rehabilitative benefit of tinted lenses and filters for people with low vision. Currently, eye care practitioners have to rely on marketing literature and anecdotal reports from users when making recommendations for tinted lens or filter use in low vision. Our main aim was to locate a prescribing protocol that was scientifically based and could assist low vision specialists with tinted lens prescribing decisions. We also wanted to determine if previous work had found any tinted lens/task or tinted lens/ocular condition relationships, i.e. were certain tints or filters of use for specific tasks or for specific eye conditions. Another aim was to provide a review of previous research in order to stimulate new work using modern experimental designs. Past studies of tinted lenses and low vision have assessed effects on visual acuity (VA), grating acuity, contrast sensitivity (CS), visual field, adaptation time, glare, photophobia and TV viewing. Objective and subjective outcome measures have been used. However, very little objective evidence has been provided to support anecdotal reports of improvements in visual performance. Many studies are flawed in that they lack controls for investigator bias, and placebo, learning and fatigue effects. Therefore, the use of tinted lenses in low vision remains controversial and eye care practitioners will have to continue to rely on anecdotal evidence to assist them in their prescribing decisions. Suggestions for future research, avoiding some of these experimental shortcomings, are made. © 2002 The College of Optometrists.
Abstract:
The question of how to develop leaders so that they are more effective in a variety of situations, roles and levels has inspired a voluminous amount of research. While leader development programs such as executive coaching and 360-degree feedback have been widely practiced to meet this demand within organisations, the research in this area has only scratched the surface. Drawing on the past literature and leadership practices, the current research conceptualised self-regulation as a metacompetency that would assist leaders to further develop the specific competencies needed to perform effectively in their leadership role, leading to increased ratings of leader effectiveness and to enhanced group performance. To test this conceptualisation, a longitudinal field experiment was conducted across ten months with a pre-test and two post-test intervention design and a matched control group. This longitudinal field experiment compared the difference in leader and team performance after a self-regulation intervention delivered by an executive coach. Leaders in the experimental group also received feedback reports from 360-degree feedback at each stage. Participants were 40 leaders, 155 followers and 8 supervisors. Leaders' performance was measured using a multi-source perceptual measure of leader performance and objective measures of team financial and assessment performance. Analyses using repeated-measures ANCOVA on the pre-test and two post-test responses showed a significant difference in leader and team performance between the experimental and control groups. Furthermore, leader competencies mediated the relationship between self-regulation and performance. The implications of these findings for the theory and practice of leadership development training programs and the impact on organisational performance are discussed.
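The ANCOVA logic used here, comparing post-test performance between groups while adjusting for pre-test scores, reduces to a least-squares fit with the pre-test as a covariate. A minimal sketch with synthetic data; variable names and numbers are illustrative, not the study's:

```python
import numpy as np

def ancova_group_effect(pre, post, group):
    """Treatment effect on post-test scores adjusted for pre-test scores:
    fit post = b0 + b1*pre + b2*group by least squares and return b2
    (group coded 0 = control, 1 = intervention)."""
    X = np.column_stack([np.ones(len(pre)), pre, group])
    coef, *_ = np.linalg.lstsq(X, post, rcond=None)
    return coef[2]

# Synthetic scores: the intervention adds 3 points on top of the
# pre-test trend, so the adjusted group effect recovers 3.
pre = np.array([1., 2., 3., 4., 5., 6.])
group = np.array([0., 0., 0., 1., 1., 1.])
post = 2.0 + 0.5 * pre + 3.0 * group
effect = ancova_group_effect(pre, post, group)
```

A full repeated-measures analysis like the study's would model both post-tests and the within-subject correlation; this sketch only shows the covariate-adjustment step.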
Abstract:
A detailed literature survey confirmed cold roll-forming to be a complex and little understood process. In spite of its growing value, the process remains largely un-automated, with few principles used in set-up of the rolling mill. This work concentrates on experimental investigations of operating conditions in order to gain a scientific understanding of the process. The operating conditions are: inter-pass distance, roll load, roll speed and horizontal roll alignment. Fifty tests have been carried out under varied operating conditions, measuring section quality and longitudinal straining to give a picture of bending. A channel section was chosen for its simplicity and compatibility with previous work. Quality was measured in terms of vertical bow, twist and cross-sectional geometric accuracy, and a complete method of classifying quality has been devised. The longitudinal strain profile was recorded by the use of strain gauges attached to the strip surface at five locations. Parameter control is shown to be important in allowing consistency in section quality. At present rolling mills are constructed with large tolerances on operating conditions. By reducing the variability in parameters, section consistency is maintained and mill down-time is reduced. Roll load, alignment and differential roll speed are all shown to affect quality, and can be used to control it. Set-up time is reduced by improving the design of the mill so that parameter values can be measured and set without the need for judgement by eye. Values of parameters can be guided by models of the process, although elements of experience are still unavoidable. Despite increased parameter control, section quality is variable, if only due to variability in strip material properties. Parameters must therefore be changed during rolling. Ideally this can take place by closed-loop feedback control. Future work lies in overcoming the problems connected with this control.
Abstract:
The overall aim of this study was to examine experimentally the effects of noise upon short-term memory tasks in the hope of shedding further light upon the apparently inconsistent results of previous research in the area. Seven experiments are presented. The first chapter of the thesis comprised a comprehensive review of the literature on noise and human performance, while in the second chapter some theoretical questions concerning the effects of noise were considered in more detail, followed by a more detailed examination of the effects of noise upon memory. Chapter 3 described an experiment which examined the effects of noise on attention allocation in short-term memory as a function of list length. The results provided only weak evidence of increased selectivity in noise. In further chapters noise effects were investigated in conjunction with various parameters of short-term memory tasks, e.g. the retention interval and presentation rate. The results suggested that noise effects were significantly affected by the length of the retention interval but not by the rate of presentation. Later chapters examined the possibility of differential noise effects on the mode of recall (recall v. recognition) and the type of presentation (sequential v. simultaneous), as well as an investigation of the effect of varying the point of introduction of the noise and the importance of individual differences in noise research. The results of this study were consistent with the hypothesis that noise at presentation facilitates phonemic coding. However, noise during recall appeared to affect the retrieval strategy adopted by the subject.
Abstract:
During the last decade, biomedicine has witnessed tremendous development. Large amounts of experimental and computational biomedical data have been generated along with new discoveries, which are accompanied by an exponential increase in the number of biomedical publications describing these discoveries. In the meantime, there has been great interest within scientific communities in text mining tools to find knowledge, such as protein-protein interactions, that is most relevant and useful for specific analysis tasks. This paper provides an outline of the various information extraction methods in the biomedical domain, especially for the discovery of protein-protein interactions. It surveys methodologies involved in analyzing and processing plain text, categorizes current work in biomedical information extraction, and provides examples of these methods. Challenges in the field are also presented and possible solutions are discussed.
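One of the simplest baselines in this family is sentence-level co-occurrence: two proteins mentioned in the same sentence as an interaction verb are proposed as an interacting pair. A minimal sketch; the protein lexicon and verb list are illustrative assumptions (real systems use named-entity recognisers and richer patterns, as the survey discusses):

```python
import itertools
import re

# Illustrative lexicons, not from the paper.
PROTEINS = {"BRCA1", "TP53", "MDM2"}
INTERACTION_VERBS = {"binds", "phosphorylates", "inhibits", "activates"}

def extract_ppi(text):
    """Propose protein pairs that co-occur in a sentence containing an
    interaction verb."""
    pairs = set()
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        tokens = set(re.findall(r"\w+", sentence))
        if tokens & INTERACTION_VERBS:
            found = sorted(tokens & PROTEINS)
            pairs.update(itertools.combinations(found, 2))
    return pairs

pairs = extract_ppi("MDM2 binds TP53 in vivo. BRCA1 was also sequenced.")
```

Co-occurrence gives high recall but poor precision, which is why the surveyed methods move on to syntactic patterns and machine-learned classifiers.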
Abstract:
Despite the large body of research regarding the role of memory in OCD, the results are described as mixed at best (Hermans et al., 2008). For example, inconsistent findings have been reported with respect to basic capacity, intact verbal, and generally affected visuospatial memory. We suggest that this is due to the traditional pursuit of OCD memory impairment as one of general capacity and/or domain specificity (visuospatial vs. verbal). In contrast, we conclude from our experiments (i.e., Harkin & Kessler, 2009, 2011; Harkin, Rutherford, & Kessler, 2011) and recent literature (e.g., Greisberg & McKay, 2003) that OCD memory impairment is secondary to executive dysfunction, and more specifically we identify three common factors (EBL: Executive-functioning efficiency, Binding complexity, and memory Load) that we generalize to 58 experimental findings from 46 OCD memory studies. As a result we explain otherwise inconsistent findings – e.g., intact vs. deficient verbal memory – that are difficult to reconcile within a capacity or domain specific perspective. We conclude by discussing the relationship between our account and others', which in most cases is complementary rather than contradictory.
Abstract:
External metrology systems are increasingly being integrated with traditional industrial articulated robots, especially in the aerospace industries, to improve their absolute accuracy for precision operations such as drilling, machining and jigless assembly. While currently most metrology-assisted robot control systems are limited in their position update rate, such that the robot has to be stopped in order to receive a metrology coordinate update, some recent efforts are directed toward controlling robots using real-time metrology data. The indoor GPS (iGPS) is one of the metrology systems that may be used to provide real-time 6DOF data to a robot controller. Although there is a noteworthy literature dealing with the evaluation of iGPS performance, there is a lack of literature on how well the iGPS performs under dynamic conditions. This paper presents an experimental evaluation of the dynamic measurement performance of the iGPS, tracking the trajectories of an industrial robot. The same experiment is also repeated using a laser tracker. Besides the experimental results presented, this paper also proposes a novel method for dynamic repeatability comparisons of tracking instruments. © 2011 Springer-Verlag London Limited.
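A generic way to quantify dynamic repeatability is the per-sample spread of repeated measurements of the same trajectory about their mean path. This sketch is a plain RMS-spread baseline under the assumption of time-synchronised runs; it is not the paper's novel comparison method:

```python
import numpy as np

def dynamic_repeatability(runs):
    """Per-sample RMS spread of repeated trajectory measurements about
    their mean path.

    runs: (n_runs, n_samples, 3) array of XYZ measurements, assumed
    time-synchronised across repeated executions of the same trajectory.
    """
    runs = np.asarray(runs, dtype=float)
    mean_path = runs.mean(axis=0)                   # (n_samples, 3)
    dev = np.linalg.norm(runs - mean_path, axis=2)  # (n_runs, n_samples)
    return np.sqrt((dev ** 2).mean(axis=0))         # RMS per sample

# Two runs offset by +/-1 mm in X at every sample -> spread of 1 mm.
runs = [[[0., 0., 0.], [1., 0., 0.]],
        [[2., 0., 0.], [3., 0., 0.]]]
spread = dynamic_repeatability(runs)
```

Comparing this spread between the iGPS and a laser tracker over the same robot trajectory is the kind of instrument-to-instrument comparison the paper addresses.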
Abstract:
It is important to help researchers find valuable papers from a large literature collection. To this end, many graph-based ranking algorithms have been proposed. However, most of these algorithms suffer from the problem of ranking bias. Ranking bias hurts the usefulness of a ranking algorithm because it returns a ranking list with an undesirable time distribution. This paper is a focused study on how to alleviate ranking bias by leveraging the heterogeneous network structure of the literature collection. We propose a new graph-based ranking algorithm, MutualRank, that integrates mutual reinforcement relationships among networks of papers, researchers, and venues to achieve a more synthetic, accurate, and less-biased ranking than previous methods. MutualRank provides a unified model that involves both intra- and inter-network information for ranking papers, researchers, and venues simultaneously. We use the ACL Anthology Network as the benchmark data set and construct the gold standard from computer linguistics course websites of well-known universities and two well-known textbooks. The experimental results show that MutualRank greatly outperforms state-of-the-art competitors, including PageRank, HITS, CoRank, Future Rank, and P-Rank, in ranking papers, both in improving ranking effectiveness and in alleviating ranking bias. Rankings of researchers and venues by MutualRank are also quite reasonable.
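The single-network baseline that MutualRank is compared against can be sketched compactly: PageRank on a paper citation graph via power iteration. A minimal sketch of that baseline only (MutualRank itself couples three such networks, which this does not show):

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """Power-iteration PageRank over a citation graph:
    adj[i][j] = 1 if paper i cites paper j."""
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    # Dangling papers (no outgoing citations) link uniformly to all.
    P = np.where(out > 0, A / np.maximum(out, 1.0), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_next = (1 - damping) / n + damping * (P.T @ r)
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

# Paper 0 cites paper 1, so paper 1 accumulates more rank.
scores = pagerank([[0, 1], [0, 0]])
```

Because citations only flow backward in time, ranking papers by such scores favours older papers, which is exactly the ranking bias the paper sets out to alleviate.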