24 results for Gains in selection
in Aston University Research Archive
Abstract:
This thesis's objective is to discover "How are informal decisions reached by screeners when filtering out undesirable job applications?" Grounded theory techniques were employed in the field to observe and analyse informal decisions at the source, as made by screeners in three distinct empirical studies. Whilst grounded theory provided the method for case and cross-case analysis, literature from academic and non-academic sources was evaluated and integrated to strengthen this research and create a foundation for understanding informal decisions. As informal decisions in early hiring processes have been under-researched, this thesis contributes to current knowledge in several ways. First, it introduces the Cycle of Employment, which enhances Robertson and Smith's (1993) Selection Paradigm through the integration of the stages that individuals occupy whilst seeking employment. Secondly, a general depiction of the Workflow of General Hiring Processes provides a template for practitioners to map and further develop their organisational processes. Finally, it highlights the emergence of the Locality Effect, a geographically driven heuristic and bias that can significantly affect recruitment and informal decisions. Although screeners make informal decisions using multiple variables, informal decisions are made in stages, as evidenced in the Cycle of Employment. Moreover, informal decisions can be erroneous as a result of majority and minority influence, the weighting of information, the injection of inappropriate information and criteria, and the influence of an assessor. This thesis considers these faults and develops a basic framework for understanding informal decisions from which future research can be launched.
Abstract:
Purpose – The purpose of this paper is to challenge the assumption that process losses of individuals working in teams are unavoidable. The paper aims to challenge this assumption on the basis of social identity theory and recent research. Design/methodology/approach – The approach adopted in this paper is to review the mainstream literature providing strong evidence for motivation problems of individuals working in groups. Based on more recent literature, innovative ways to overcome these problems are discussed. Findings – A social identity-based analysis and recent findings summarized in this paper show that social loafing can be overcome and that even motivation gains in group work can be expected when groups are important for the individual group members' self-concepts. Practical implications – The paper provides human resource professionals and front-line managers with suggestions as to how individual motivation and performance might be increased when working in teams. Originality/value – The paper contributes to the literature by challenging the existing approach to reducing social loafing, i.e. individualizing workers as much as possible, and proposes a team-based approach instead to overcome motivation problems.
Abstract:
1. This report presents the findings of a study of the individual personalities of Naval Officers, Chief Petty Officers and Petty Officers serving in different environments within the Ministry of Defence and the Fleet. This sample was used to establish norms for the Cattell 16 PF Questionnaire, and these are compared with other occupational norms discussed in the literature. 2. The results obtained on psychometric measures were related to other data collected about the work and the formal organisation. This was in turn related to problems facing the Navy because of changes in technology which have occurred or which are now taking place and are expected to make an impact in the future. 3. A need is recognised for a way of simulating the effects of proposed changes within the manpower field of the Royal Navy, and a simulation model is put forward and discussed. 4. The use of psychometric measures in selection for entry and for special tasks is examined. Particular reference is made to problems of group formation in the context of leadership in a technical environment. 5. The control of the introduction of change is discussed in the recognition that people represent an increasingly important resource which is critical to the continuing life of the total organisation. 6. Conclusions are drawn from the various strands of the research and recommendations are made both for line management and for subsequent research programmes.
Abstract:
Manufacturing firms are driven by competitive pressures to continually improve the effectiveness and efficiency of their organisations. For this reason, manufacturing engineers often implement changes to existing processes, or design new production facilities, with the expectation of making further gains in manufacturing system performance. This thesis relates to how the likely outcome of this type of decision should be predicted prior to its implementation. The thesis argues that since manufacturing systems must also interact with many other parts of an organisation, the expected performance improvements can often be significantly hampered by constraints that arise elsewhere in the business. As a result, decision-makers should attempt to predict just how well a proposed design will perform when these other factors, or 'support departments', are taken into consideration. However, the thesis also demonstrates that, in practice, where quantitative analysis is used to evaluate design decisions, the analysis model invariably ignores the potential impact of support functions on a system's overall performance. A more comprehensive modelling approach is therefore required. A study of how various business functions interact establishes that to properly represent the kind of delays that give rise to support department constraints, a model should actually portray the dynamic and stochastic behaviour of entities in both the manufacturing and non-manufacturing aspects of a business. This implies that computer simulation should be used to model design decisions, but current simulation software does not provide a sufficient range of functionality to enable the behaviour of all of these entities to be represented in this way. The main objective of the research has therefore been the development of a new simulator that will overcome limitations of existing software and so enable decision-makers to conduct a more holistic evaluation of design decisions.
It is argued that the application of object-oriented techniques offers a potentially better way of fulfilling both the functional and ease-of-use requirements relating to development of the new simulator. An object-oriented analysis and design of the system, called WBS/Office, is therefore presented, which extends to modelling a firm's administrative and other support activities in the context of the manufacturing system design process. A particularly novel feature of the design is the ability for decision-makers to model how a firm's specific information and document processing requirements might hamper shop-floor performance. The simulator is primarily intended for modelling make-to-order batch manufacturing systems, and the thesis presents example models created using a working version of WBS/Office that demonstrate the feasibility of using the system to analyse manufacturing system designs in this way.
Abstract:
Lock-in is observed in real-world markets of experience goods; experience goods are goods whose characteristics are difficult to determine in advance, but are ascertained upon consumption. We create an agent-based simulation of consumers choosing between two experience goods available in a virtual market. We model consumers in a grid representing the spatial network of the consumers. Utilising simple assumptions, including identical distributions of product experience and consumers having a degree of follower tendency, we explore the dynamics of the model through simulations. We conduct simulations to create a lock-in before testing several hypotheses on how to break an existing lock-in; these include the effect of advertising and free give-aways. Our experiments show that the key to successfully breaking a lock-in was the creation of regions in a consumer population. Regions arise due to the degree of local conformity between agents within the regions, and spread throughout the population when a mildly superior competitor is available. These regions may be likened to a niche in a market, which gains popularity before transitioning into the mainstream.
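The mechanics of such a model can be sketched in a few lines. The following is a minimal illustrative version, not the authors' actual simulation: the parameter names (`follow`, `quality`), the von Neumann neighbourhood, and the Gaussian experience draws are assumptions made purely for the sketch.

```python
import random

def simulate(size=20, steps=5000, follow=0.7, quality=(0.5, 0.5), seed=1):
    """Toy lattice market: consumers on a size x size torus repeatedly
    revise their choice between product 0 and product 1."""
    rng = random.Random(seed)
    # start from random initial choices
    choice = [[rng.randint(0, 1) for _ in range(size)] for _ in range(size)]
    for _ in range(steps):
        x, y = rng.randrange(size), rng.randrange(size)
        # neighbours' current choices (von Neumann neighbourhood, wrapped)
        nbrs = [choice[(x + dx) % size][(y + dy) % size]
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        if rng.random() < follow:
            # follower tendency: copy the local majority
            choice[x][y] = int(sum(nbrs) >= 2)
        else:
            # otherwise sample one experience of each good and keep the better
            exp0 = rng.gauss(quality[0], 0.1)
            exp1 = rng.gauss(quality[1], 0.1)
            choice[x][y] = int(exp1 > exp0)
    # market share of product 1; values near 0 or 1 indicate lock-in
    return sum(map(sum, choice)) / size ** 2
```

With identical qualities and a high `follow` value, repeated runs tend to drift towards one product dominating, which is the lock-in effect the abstract describes; local conformity is what produces the regions discussed above.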
Abstract:
After the ten Regional Water Authorities (RWAs) of England and Wales were privatized in November 1989, the successor Water and Sewerage Companies (WASCs) faced a new regulatory regime that was designed to promote economic efficiency while simultaneously improving drinking water and environmental quality. As legally mandated quality improvements necessitated a costly capital investment programme, the industry's economic regulator, the Office of Water Services (Ofwat), implemented a retail price index (RPI)+K pricing system, which was designed to compensate the WASCs for their capital investment programme while also encouraging gains in economic efficiency. In order to analyse jointly the impact of privatization, as well as the impact of increasingly stringent economic and environmental regulation on the WASCs' economic performance, this paper estimates a translog multiple output cost function model for the period 1985–1999. Given the significant costs associated with water quality improvements, the model is augmented to include the impact of drinking water quality and environmental quality on total costs. The model is then employed to determine the extent of scale and scope economies in the water and sewerage industry, as well as the impact of privatization and economic regulation on economic efficiency.
Abstract:
The search by many investigators for a solution to the reading problems encountered by individuals with no central vision has been long and, to date, not very fruitful. Most textual manipulations, including font size, have led to only modest gains in reading speed. Previous work on spatial integrative properties of peripheral retina suggests that 'visual crowding' may be a major factor contributing to inefficient reading. Crowding refers to the fact that juxtaposed targets viewed eccentrically may be difficult to identify. The purpose of this study was to assess the combined effects of line spacing and word spacing on the ability of individuals with age-related macular degeneration (ARMD) to read short passages of text that were printed with either high (87.5%) or low contrast (17.5%) letters. Low contrast text was used to avoid potential ceiling effects and to mimic a possible reduction in letter contrast with light scatter from media opacities. For both low and high contrast text, the fastest reading speeds we measured were for passages of text with double line and double word spacing. In comparison with standard single spacing, double word/line spacing increased reading speed by approximately 26% with high contrast text (p < 0.001), and by 46% with low contrast text (p < 0.001). In addition, double line/word spacing more than halved the number of reading errors obtained with single spaced text. We compare our results with previous reading studies on ARMD patients, and conclude that crowding is detrimental to reading and that its effects can be reduced with enhanced text spacing. Spacing is particularly important when the contrast of the text is reduced, as may occur with intraocular light scatter or poor viewing conditions. We recommend that macular disease patients should employ double line spacing and double-character word spacing to maximize their reading efficiency. © 2013 Blackmore-Wright et al.
Abstract:
Loss of central vision caused by age-related macular degeneration (AMD) is a problem affecting increasingly large numbers of people within the ageing population. AMD is the leading cause of blindness in the developed world, with estimates of over 600,000 people affected in the UK. Central vision loss can be devastating for the sufferer, impacting the ability to carry out daily activities. In particular, inability to read is linked to higher rates of depression in AMD sufferers compared to age-matched controls. Methods to improve reading ability in the presence of central vision loss will help maintain independence and quality of life for those affected. Various attempts to improve reading with central vision loss have been made. Most textual manipulations, including font size, have led to only modest gains in reading speed. Previous experimental work and theoretical arguments on the spatial integrative properties of the peripheral retina suggest that 'visual crowding' may be a major factor contributing to inefficient reading. Crowding refers to the phenomenon whereby juxtaposed targets viewed eccentrically may be difficult to identify. Manipulating the text spacing of reading material may be a simple method that reduces crowding and benefits reading ability in macular disease patients. In this thesis the effect of textual manipulation on reading speed was investigated, firstly for normally sighted observers using eccentric viewing, and secondly for observers with central vision loss. Test stimuli mimicked normal reading conditions by using whole sentences that required normal saccadic eye movements and observer comprehension. Preliminary measures on normally sighted observers (n = 2) used forced-choice procedures in conjunction with the method of constant stimuli.
Psychometric functions relating the proportion of correct responses to exposure time were determined for text size, font type (Lucida Sans and Times New Roman) and text spacing, with threshold exposure time (75% correct responses) used as a measure of reading performance. The results of these initial measures were used to derive an appropriate search space, in terms of text spacing, for assessing reading performance in AMD patients. The main clinical measures were completed on a group of macular disease sufferers (n = 24). Firstly, high and low contrast reading acuity and critical print size were measured using modified MNREAD test charts, and secondly, the effect of word and line spacing was investigated using a new test, designed specifically for this study, called the Equal Readability Passages (ERP) test. The results from normally sighted observers were in close agreement with those from the group of macular disease sufferers. Results show that: (i) optimum reading performance was achieved when using both double line and double word spacing; (ii) the effect of line spacing was greater than the effect of word spacing; and (iii) a text size of approximately 0.85° is sufficiently large for reading at 5° eccentricity. In conclusion, the results suggest that crowding is detrimental to reading with peripheral vision, and its effects can be minimised with a modest increase in text spacing.
Abstract:
For more than a century it has been known that the eye is not a perfect optical system, but rather a system that suffers from aberrations beyond conventional prescriptive descriptions of defocus and astigmatism. Whereas traditional refraction attempts to describe the error of the eye with only two parameters, namely sphere and cylinder, measurements of wavefront aberrations depict the optical error with many more parameters. What remains questionable is the impact these additional parameters have on visual function. Some authors have argued that higher-order aberrations have a considerable effect on visual function and in certain cases this effect is significant enough to induce amblyopia. This has been referred to as ‘higher-order aberration-associated amblyopia’. In such cases, correction of higher-order aberrations would not restore visual function. Others have reported that patients with binocular asymmetric aberrations display an associated unilateral decrease in visual acuity and, if the decline in acuity results from the aberrations alone, such subjects may have been erroneously diagnosed as amblyopes. In these cases, correction of higher-order aberrations would restore visual function. This refractive entity has been termed ‘aberropia’. In order to investigate these hypotheses, the distribution of higher-order aberrations in strabismic, anisometropic and idiopathic amblyopes, and in a group of visual normals, was analysed both before and after wavefront-guided laser refractive correction. 
The results show: (i) there is no significant asymmetry in higher-order aberrations between amblyopic and fixing eyes prior to laser refractive treatment; (ii) the mean magnitude of higher-order aberrations is similar within the amblyopic and visually normal populations; (iii) a significant improvement in visual acuity can be realised for adult amblyopic patients utilising wavefront-guided laser refractive surgery, and a modest increase in contrast sensitivity was observed for the amblyopic eye of anisometropes following treatment; and (iv) an overall trend towards increased higher-order aberrations following wavefront-guided laser refractive treatment was observed for both visually normal and amblyopic eyes. In conclusion, while the data do not provide any direct evidence for the concepts of either 'aberropia' or 'higher-order aberration-associated amblyopia', it is clear that gains in visual acuity and contrast sensitivity may be realised following laser refractive treatment of the amblyopic adult eye. Possible mechanisms by which these gains are realised are discussed.
Abstract:
This paper develops a theoretical analysis of the tradeoff between carrier suppression and nonlinearities induced by optical IQ modulators in direct-detection subcarrier multiplexing systems. The tradeoff is obtained by examining the influence of the bias conditions of the modulator on the transmitted single side band signal. The frequency components in the electric field and the associated photocurrent at the output of the IQ modulator are derived mathematically. For any frequency plan, the optimum bias point can be identified by calculating the sensitivity gain for every subchannel. A setup composed of subcarriers located at multiples of the data rate ensures that the effects of intermodulation distortion are studied in the most suitable conditions. Experimental tests with up to five QPSK electrical subchannels are performed to verify the mathematical model and validate the predicted gains in sensitivity.
Abstract:
Dynamic asset rating is one of a number of techniques that could be used to facilitate low-carbon electricity network operation. This paper focusses on distribution-level transformer dynamic rating in this context. The models available for use with dynamic asset rating are discussed and compared using measured load and weather conditions from a trial network area within Milton Keynes. The paper then uses the most appropriate model to investigate, through simulation, the potential gains of dynamic rating compared to static rating under two transformer cooling methods, to understand the potential gain to the network operator.
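The kind of calculation behind a dynamic rating can be illustrated with a simplified steady-state thermal model. The sketch below follows the general form of the IEC 60076-7 loading-guide equations, but every constant (55 K top-oil rise, 23 K hot-spot rise, loss ratio R = 6, exponents x and y, 98 °C limit) is an illustrative assumption, not a value from the paper:

```python
def steady_hot_spot(K, theta_amb, d_oil=55.0, d_hs=23.0, R=6.0, x=0.8, y=1.3):
    """Steady-state hot-spot temperature (simplified IEC 60076-7 form)
    for a per-unit load K at ambient temperature theta_amb (degrees C)."""
    top_oil_rise = d_oil * ((1 + R * K ** 2) / (1 + R)) ** x
    winding_rise = d_hs * K ** y
    return theta_amb + top_oil_rise + winding_rise

def dynamic_rating(theta_amb, limit=98.0):
    """Largest per-unit load that keeps the hot spot at or below `limit`
    for the given ambient, found by bisection (hot spot is monotonic in K)."""
    lo, hi = 0.0, 3.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if steady_hot_spot(mid, theta_amb) <= limit:
            lo = mid
        else:
            hi = mid
    return lo
```

With these constants the rating is exactly 1.0 p.u. at 20 °C ambient and rises as the ambient falls, which is the effect the paper exploits: using measured weather rather than worst-case assumptions lets the same transformer carry more load than its static rating implies.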
Abstract:
In this paper, we demonstrate through computer simulation and experiment a novel subcarrier coding scheme combined with pre-electrical dispersion compensation (pre-EDC) for fiber nonlinearity mitigation in coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems. As the frequency spacing in CO-OFDM systems is usually small (tens of MHz), neighbouring subcarriers tend to experience correlated nonlinear distortions after propagation over a fiber link. As a consequence, nonlinearity mitigation can be achieved by encoding and processing neighbouring OFDM subcarriers simultaneously. Herein, we propose to adopt the concept of dual phase conjugated twin wave for CO-OFDM transmission. Simulation and experimental results show that this simple technique combined with 50% pre-EDC can effectively offer up to 1.5 and 0.8 dB performance gains in CO-OFDM systems with BPSK and QPSK modulation formats, respectively.
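The underlying idea of pairing subcarriers with phase-conjugated copies can be sketched as follows. This is an illustrative toy, not the dual phase-conjugated twin-wave scheme of the paper: it assumes, as first-order twin-wave theory does, that the nonlinear distortion on the conjugate twin is approximately the negative conjugate of the distortion on the original, so that coherent superposition cancels it. Naive pairing as shown here halves the spectral efficiency, a cost the dual variant studied in the paper is designed to avoid.

```python
def encode_pct(symbols):
    """Map each data symbol onto a pair of neighbouring subcarriers,
    the second carrying the complex conjugate (the phase-conjugated twin)."""
    out = []
    for s in symbols:
        out.append(s)
        out.append(s.conjugate())
    return out

def decode_pct(received):
    """Combine each twin pair: averaging a subcarrier with the conjugate of
    its twin cancels anti-correlated first-order nonlinear distortion."""
    return [0.5 * (received[i] + received[i + 1].conjugate())
            for i in range(0, len(received), 2)]
```

Under the assumed anti-correlation, a distortion d added to the first subcarrier appears as -d.conjugate() on its twin, and the two contributions cancel exactly in `decode_pct`; uncorrelated noise is merely averaged down by the pairing.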
Abstract:
Privately owned water utilities typically operate under a regulated monopoly regime. Price-cap regulation has been introduced as a means to enhance efficiency and innovation. The main objective of this paper is to propose a methodology for measuring productivity change across companies and over time when the sample size is limited. An empirical application is developed for the UK water and sewerage companies (WaSCs) for the period 1991-2008. A panel index approach is applied to decompose and derive unit-specific productivity growth as a function of the productivity growth achieved by benchmark firms, and the catch-up to the benchmark firm achieved by less productive firms. The results indicated that significant gains in productivity occurred after 2000, when the regulator set tighter reviews. However, the average WaSC must still improve by 2.69% over a period of five years to achieve performance comparable to the benchmark firm. This study is relevant to regulators who are interested in developing comparative performance measurement when the number of water companies that can be evaluated is limited. Moreover, setting an appropriate X factor is essential to improving the efficiency of water companies, and this study helps to address that challenge.
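The decomposition described above can be sketched numerically. This is an illustrative version under simple assumptions (the benchmark is the most productive firm in each period, and growth is measured in logs), not the paper's exact panel index:

```python
import math

def productivity_decomposition(tfp):
    """Decompose each firm's log-TFP growth into the frontier shift
    (the benchmark firm's growth) plus catch-up to that benchmark.
    `tfp` maps firm name -> (productivity in period 0, in period 1)."""
    # benchmark level = most productive firm in each period
    b0 = max(v[0] for v in tfp.values())
    b1 = max(v[1] for v in tfp.values())
    frontier_shift = math.log(b1 / b0)
    out = {}
    for firm, (t0, t1) in tfp.items():
        growth = math.log(t1 / t0)
        # catch-up: the part of growth that closes the gap to the frontier
        out[firm] = {"growth": growth,
                     "frontier_shift": frontier_shift,
                     "catch_up": growth - frontier_shift}
    return out
```

By construction each firm's growth is exactly the frontier shift plus its catch-up term, so a laggard can outgrow the benchmark only by catching up, which is the distinction the panel index approach is designed to expose even in small samples.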
Abstract:
Ad hoc wireless sensor networks (WSNs) are formed from self-organising configurations of distributed, energy-constrained, autonomous sensor nodes. The service lifetime of such sensor nodes depends on the power supply and the energy consumption, which is typically dominated by the communication subsystem. One of the key challenges in unlocking the potential of such data-gathering sensor networks is conserving energy so as to maximise their post-deployment active lifetime. This thesis describes the research carried out on the continual development of the novel energy-efficient Optimised grids algorithm, which increases WSN lifetime and improves on QoS parameters, yielding higher throughput and lower latency and jitter for the next generation of WSNs. Based on the relationship between range and traffic, the novel Optimised grids algorithm provides a robust, traffic-dependent, energy-efficient grid size that minimises the cluster-head energy consumption in each grid and balances energy use throughout the network. Efficient spatial reusability allows the novel Optimised grids algorithm to improve on network QoS parameters. The most important advantage of this model is that it can be applied to all one- and two-dimensional traffic scenarios where the traffic load may fluctuate due to sensor activities. During traffic fluctuations the novel Optimised grids algorithm can be used to re-optimise the wireless sensor network to bring further benefits in energy reduction and improvement in QoS parameters. As idle energy becomes dominant at lower traffic loads, the new Sleep Optimised grids model incorporates sleep-energy and idle-energy duty cycles that can be implemented to achieve further network lifetime gains in all wireless sensor network models. Another key advantage of the novel Optimised grids algorithm is that it can be implemented alongside existing energy-saving protocols such as GAF, LEACH, SMAC and TMAC to further enhance network lifetimes and improve on QoS parameters. The novel Optimised grids algorithm does not interfere with these protocols, but creates an overlay to optimise the grid sizes and hence the transmission range of wireless sensor nodes.