890 results for Human Performance
Abstract:
Whole life costing (WLC) has become best practice in construction procurement, and accurately predicting the whole life costs of a construction project is likely to be a major issue. However, different expectations from different organizations throughout a project's life, together with the lack of data, monitoring targets, and long-term interest for many key players, are obstacles to be overcome if WLC is to be implemented. A questionnaire survey was undertaken to investigate a set of ten common factors and 188 individual factors. These were grouped into eight critical categories (project scope, time, cost, quality, contract/administration, human resource, risk, and health and safety) by project phase, as perceived by clients, contractors and subcontractors, in order to identify critical success factors for whole life performance assessment (WLPA). Using a relative importance index, the top ten critical factors for each category, from the perspective of project participants, were analyzed and ranked. Their agreement on those categories and factors was analyzed using Spearman's rank correlation. All participants identify “Type of Project” as the most common critical factor in the eight categories for WLPA. Using the relative index ranking technique and weighted average methods, the most critical individual factors in each category were found to be: “clarity of contract” (scope); “fixed construction period” (time); “precise project budget estimate” (cost); “material quality” (quality); “mutual/trusting relationships” (contract/administration); “leadership/team management” (human resource); and “management of work safety on site” (health and safety). There was relatively high agreement on these categories among all participants. Across the 80 critical factors of WLPA, there is a stronger positive relationship between client and contractor than between contractor and subcontractor, or between client and subcontractor. Putting these critical factors into a criteria matrix can facilitate an initial framework of WLPA to aid decision making in the South Korean public sector during the evaluation/selection of construction projects at the bid stage.
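The two ranking tools named above are standard survey instruments; the sketch below shows how they typically fit together. The RII formula used here (sum of ratings divided by the highest rating times the number of respondents) is the common form, assumed rather than taken from the paper, and all factor names and ratings are invented.

```python
# Hedged illustration of the abstract's two ranking tools: a Relative
# Importance Index (RII) per factor, and Spearman's rank correlation to
# measure agreement between participant groups. All numbers are invented.
from scipy.stats import spearmanr

def rii(ratings, top_of_scale=5):
    # Common RII form: sum of ratings / (highest rating * no. of respondents).
    return sum(ratings) / (top_of_scale * len(ratings))

factors = ["clarity of contract", "fixed construction period", "material quality"]
client_ratings     = [[5, 4, 5, 4], [3, 4, 3, 3], [4, 4, 5, 4]]
contractor_ratings = [[4, 5, 4, 4], [4, 3, 4, 4], [5, 4, 4, 5]]

client_rii     = [rii(r) for r in client_ratings]
contractor_rii = [rii(r) for r in contractor_ratings]

# Agreement between the two groups' rankings of the same factors.
rho, p_value = spearmanr(client_rii, contractor_rii)
for name, c, k in zip(factors, client_rii, contractor_rii):
    print(f"{name}: client RII={c:.2f}, contractor RII={k:.2f}")
print(f"agreement between groups: Spearman rho={rho:.2f} (p={p_value:.2f})")
```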
Abstract:
This research was concerned with identifying factors which may influence human reliability within chemical process plants; these factors are referred to as Performance Shaping Factors (PSFs). Following a period of familiarization within the industry, a number of case studies were undertaken covering a range of basic influencing factors. Plant records and site 'lost time incident reports' were also used as supporting evidence for identifying and classifying PSFs. In parallel to the investigative research, the available literature on human reliability assessment and PSFs was considered in relation to the chemical process plant environment. As a direct result of this work, a PSF classification structure has been produced with an accompanying detailed listing. Phase two of the research considered the identification of important individual PSFs for specific situations. Based on the experience and data gained during phase one, it emerged that certain generic features of a task influenced PSF relevance. This led to the establishment of a finite set of generic task groups and response types. Similarly, certain PSFs influence some human errors more than others. The result was a set of error type key words, plus the identification and classification of error causes with their underlying error mechanisms. By linking all these aspects together, a comprehensive methodology has been put forward as the basis of a computerized aid for system designers. To recapitulate, the major results of this research have been: one, the development of a comprehensive PSF listing specifically for the chemical process industries, with a classification structure that facilitates future updates; and two, a model for identifying relevant PSFs and their order of priority. Future requirements are the evaluation of the PSF listing and of the identification method. The latter must be considered both in terms of 'usability' and in terms of its success as a design enhancer, measured as an observable reduction in important human errors.
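A minimal sketch of how the linked structures described here (generic task groups, error-type key words, and a PSF listing) might be wired together in such a computerized designer's aid. Every task group, PSF, and keyword below is an invented placeholder, not the thesis's actual listing or priority model.

```python
# Illustrative only: linking generic task groups and error-type key words to
# a prioritized PSF listing, as a designer's lookup aid. Entries are invented.
GENERIC_TASK_GROUPS = {
    "monitoring": ["display legibility", "alarm salience", "shift length/vigilance"],
    "manual valve operation": ["labelling clarity", "physical accessibility",
                               "procedure quality"],
}
ERROR_TYPE_KEYWORDS = {
    "omission": ["task interruption", "memory lapse"],
    "commission": ["mis-identification", "expectation bias"],
}

def relevant_psfs(task_group: str, error_type: str) -> list[str]:
    # Task features select the candidate PSFs; the error type appends the
    # underlying causes/mechanisms to consider, in order.
    return (GENERIC_TASK_GROUPS.get(task_group, [])
            + ERROR_TYPE_KEYWORDS.get(error_type, []))

print(relevant_psfs("monitoring", "omission"))
```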
Abstract:
This thesis addresses the viability of automatic speech recognition for control room systems; with careful system design, automatic speech recognition (ASR) devices can be a useful means of human-computer interaction for specific types of task. These tasks can be defined as complex verbal activities, such as command and control, and can be paired with spatial tasks, such as monitoring, without detriment. It is suggested that ASR use be confined to routine plant operation, as opposed to critical incidents, due to possible problems of stress on the operators' speech. It is proposed that using ASR will require operators to adapt a commonly used skill to cater for a novel use of speech. Before using the ASR device, new operators will require some form of training. It is shown that a demonstration by an experienced user of the device can lead to better performance than instructions alone. Thus, a relatively cheap and very efficient form of operator training can be supplied by demonstration by experienced ASR operators. From a series of studies into speech-based interaction with computers, it is concluded that the interaction should be designed to capitalise upon the tendency of operators to use short, succinct, task-specific styles of speech. From studies comparing different types of feedback, it is concluded that operators should be given screen-based feedback, rather than auditory feedback, for control room operation. Feedback will take two forms: the use of the ASR device will require recognition feedback, which is best supplied using text, while the performance of a process control task will require task feedback integrated into the mimic display. This latter feedback can be either textual or symbolic, but it is suggested that symbolic feedback will be more beneficial. Related to both interaction style and feedback is the issue of handling recognition errors. These should be corrected by simple command repetition, rather than by error-handling dialogues. This method of error correction is held to be non-intrusive to primary command and control operations. This thesis also addresses some of the problems of user error in ASR use, and provides a number of recommendations for its reduction.
Abstract:
This thesis initially presents an 'assay' of the literature pertaining to individual differences in human-computer interaction. A series of experiments is then reported, designed to investigate the association between a variety of individual characteristics and various computer task and interface factors. Predictor variables included age, computer expertise, and psychometric tests of spatial visualisation, spatial memory, logical reasoning, associative memory, and verbal ability. These were studied in relation to a variety of computer-based tasks, including: (i) word processing and its component elements; (ii) the location of target words within passages of text; (iii) the navigation of networks and menus; (iv) command generation using menus and command line interfaces; (v) the search and selection of icons and text labels; and (vi) information retrieval. A measure of self-reported workload was also included in several of these experiments. The main experimental findings included: (i) an interaction between spatial ability and the manipulation of semantic, but not spatial, interface content; (ii) verbal ability being predictive of only certain task components of word processing; (iii) age differences in word processing and information retrieval speed, but not accuracy; (iv) evidence of compensatory strategies being employed by older subjects; (v) evidence of performance strategy differences which disadvantaged high-spatial subjects in conditions of low spatial information content; (vi) interactive effects of associative memory, expertise and command strategy; (vii) an association between logical reasoning and word processing but not information retrieval; (viii) an interaction between expertise and cognitive demand; and (ix) a stronger association between cognitive ability and novice performance than expert performance.
Abstract:
New Technology Based Firms (NTBFs) are considered to be important for the economic development of a country in terms of both employment growth and innovative activity. The latter is believed to contribute significantly to the increase in productivity, and therefore the competitiveness, of the UK economy. This study contributes to the above literature by investigating two of the factors believed to limit the growth of such firms in the UK. The first concerns the existence of a ‘knowledge gap’, while the second concerns the existence of a ‘financial gap’. These themes are developed along three main research lines. Firstly, based upon the human capital theory initially proposed by Becker (1964), new evidence is provided on the human capital characteristics (experience and education) of current UK NTBF entrepreneurs. Secondly, the causal effect of general and specific human capital (as well as their interactions) on company performance and growth is investigated, via its traditional direct effect as well as via its indirect effect upon access to external finance. Finally, more light is shed on the financial structure and the type of financial constraints that high-tech firms face at start-up. In particular, whether a financial gap exists is explored by distinguishing between the demand for and the supply of external finance, as well as by type of external source of financing. The empirical testing of the various research hypotheses was carried out through an original survey of new technology based firms, defined as independent companies established in the past 25 years in R&D intensive sectors. The resulting dataset contains information for 412 companies on a number of general company characteristics and the characteristics of their entrepreneurs in 2004. Policy and practical implications for future and current entrepreneurs, and also for providers of external finance, are provided.
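The direct and indirect (finance-mediated) channels described above are typically estimated with a pair of regressions; the sketch below shows that structure on synthetic data. All variable names and coefficients are illustrative assumptions, not the study's measures or results.

```python
# Sketch of a two-equation test of direct vs. indirect (finance-mediated)
# effects of human capital on firm growth. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 412  # same size as the survey's dataset
education  = rng.normal(size=n)   # proxy for general human capital
experience = rng.normal(size=n)   # proxy for specific human capital
finance = 0.4 * education + 0.3 * experience + rng.normal(size=n)
growth  = 0.2 * education + 0.5 * finance + rng.normal(size=n)

# Equation 1: human capital -> access to external finance (indirect channel).
eq1 = sm.OLS(finance, sm.add_constant(
    np.column_stack([education, experience]))).fit()

# Equation 2: growth on human capital plus finance, so the human capital
# coefficients capture the direct effect holding the mediated channel fixed.
eq2 = sm.OLS(growth, sm.add_constant(
    np.column_stack([education, experience, finance]))).fit()

print(eq1.params)  # [const, education, experience]
print(eq2.params)  # [const, education, experience, finance]
```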
Abstract:
This thesis investigates how people select items from a computer display using the mouse input device. The term computer mouse refers to a class of input devices which share certain features, but which may have different characteristics that influence the ways in which people use the device. Although task completion time is one of the most commonly used performance measures for input device evaluation, there is no consensus as to its definition; furthermore, most mouse studies fail to provide adequate assurances regarding its correct measurement. Precise and accurate timing software was therefore developed which permitted the recording of movement data and, by means of automated analysis, yielded the device movements made. Input system gain, an important task parameter, has been poorly defined and misconceptualized in most previous studies; the issue of gain is clarified and investigated within this thesis. Movement characteristics varied between users and within users, even for the same task conditions. The variables of target size, movement amplitude, and experience exerted significant effects on performance. Subjects consistently undershot the target area, which may be a consequence of the particular task demands. Although task completion times indicated that mouse performance had stabilized after 132 trials, the movement traces, even of very experienced users, indicated that there was still considerable room for improvement, as shown by the proportion of poorly made movements. The mouse input device was suitable for older novice device users, but they took longer to complete the experimental trials. Given the diversity and inconsistency of device movements, even for the same task conditions, caution is urged when interpreting averaged group data. Performance was found to be sensitive to task conditions, device implementations, and experience in ways which are problematic for theoretical descriptions of device movement, and which limit the generalizability of the findings within this thesis.
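For readers unfamiliar with the two task parameters at issue, the conventional definitions are worth stating; these are the standard forms, and the thesis's own clarified formulation of gain may differ. Input system gain is the ratio of cursor movement to device movement, and pointing time in such target-selection tasks is commonly modelled by Fitts' law, which is consistent with the reported effects of target size and movement amplitude.

```latex
% Conventional definitions (assumed forms, not necessarily the thesis's):
\[
  G \;=\; \frac{\Delta x_{\text{cursor}}}{\Delta x_{\text{device}}},
  \qquad
  MT \;=\; a + b \log_2\!\left(\frac{2A}{W}\right),
\]
% where $G$ is input system gain, $MT$ movement time, $A$ the movement
% amplitude, $W$ the target width, and $a, b$ empirically fitted constants.
```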
Abstract:
Over the past decade, several experienced Operational Researchers have advanced the view that the theoretical aspects of model building have raced ahead of the ability of people to use them. Consequently, the impact of Operational Research on commercial organisations and the public sector is limited, and many systems fail to achieve their anticipated benefits in full. The primary objective of this study is to examine a complex interactive Stock Control system, and to identify the reasons for the differences between the theoretical expectations and the operational performance. The methodology used is to hypothesise all the possible factors which could cause a divergence between theory and practice, and to evaluate numerically the effect each of these factors has on two main control indices - Service Level and Average Stock Value. Both analytical and empirical methods are used, and simulation is employed extensively. The factors are divided into two main categories for analysis - theoretical imperfections in the model, and the usage of the system by Buyers. No evidence could be found in the literature of any previous attempt to place the differences between theory and practice in a system into quantitative perspective nor, more specifically, to study the effects of Buyer/computer interaction in a Stock Control system. The study reveals that, in general, the human factors influencing performance are of a much higher order of magnitude than the theoretical factors, thus providing objective evidence to support the original premise. The most important finding is that, by judicious intervention into an automatic stock control algorithm, it is possible for Buyers to produce results which not only attain but surpass the algorithmic predictions. However, the complexity and behavioural recalcitrance of these systems are such that an innately numerate, enquiring type of Buyer needs to be inducted to realise the performance potential of the overall man/computer system.
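As a hedged sketch of the kind of simulation experiment described (the thesis's actual stock control model and parameters are not given here), the toy periodic-review system below computes the study's two control indices, Service Level and Average Stock Value, from invented demand data.

```python
# Minimal sketch: a periodic-review, order-up-to stock control rule scored
# on Service Level and Average Stock Value. All parameters are invented.
import random

random.seed(1)
ORDER_UP_TO, REVIEW_PERIOD, LEAD_TIME, UNIT_COST = 120, 5, 3, 2.0

stock, pipeline = ORDER_UP_TO, []   # on-hand stock; outstanding orders
demand_met = demand_total = 0
stock_levels = []

for day in range(365):
    # Receive any orders whose lead time has elapsed.
    stock += sum(q for (due, q) in pipeline if due == day)
    pipeline = [(due, q) for (due, q) in pipeline if due != day]

    # Daily demand; unmet demand is lost and hurts the service level.
    demand = random.randint(0, 8)
    sold = min(stock, demand)
    stock -= sold
    demand_met += sold
    demand_total += demand
    stock_levels.append(stock)

    # Periodic review: order back up to the target level.
    if day % REVIEW_PERIOD == 0:
        outstanding = sum(q for (_, q) in pipeline)
        shortfall = ORDER_UP_TO - stock - outstanding
        if shortfall > 0:
            pipeline.append((day + LEAD_TIME, shortfall))

service_level = demand_met / demand_total
avg_stock_value = UNIT_COST * sum(stock_levels) / len(stock_levels)
print(f"Service Level: {service_level:.1%}, "
      f"Average Stock Value: {avg_stock_value:.2f}")
```

Buyer intervention of the kind the study examines could be modelled as manual overrides of the computed shortfall on selected review days.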
Abstract:
The ability to measure the response of the oculomotor system, such as ocular accommodation, accurately and in real-world environments is essential. New instruments have been developed over the past 50 years to measure eye focus, including the extensively utilised and well-validated Canon R-1, but in general these have had limitations such as a closed field-of-view, poor temporal resolution, and instrumentation bulk that prevents naturalistic performance of environmental tasks. The use of photoretinoscopy, and more specifically the PowerRefractor, was examined in this regard due to its remote nature, its binocular measurement of accommodation, eye movement and pupil size, and its open field-of-view. The accuracy of the PowerRefractor in measuring refractive error was on average similar to, but more variable than, subjective refraction and previously validated instrumentation. The PowerRefractor was found to be tolerant of eye movements away from the visual axis, but could not function with small pupil sizes in brighter illumination. It underestimated the lead of accommodation and overestimated the slope of the accommodation stimulus-response curve. The PowerRefractor and the SRW-5000 were used to measure oculomotor responses in a variety of real-world environments: spectacles compared to single vision contact lenses; the use of multifocal contact lenses by pre-presbyopes (relevant to studies on myopia retardation); and ‘accommodating’ intraocular lenses. Due to the accuracy concerns with the PowerRefractor, a purpose-built photoretinoscope was designed to measure the oculomotor response to a monocular head-mounted display. In conclusion, this thesis has shown the ability of photoretinoscopy to quantify changes in the oculomotor system. However, there are some major limitations to the PowerRefractor, such as the need for individual calibration for accurate measures of accommodation and vergence, and the relatively large pupil size necessary for measurement.
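The stimulus-response behaviour mentioned here is conventionally summarized by a fitted line; in this standard (assumed) formulation, the reported slope overestimate and lead underestimate are easy to place.

```latex
% Conventional accommodation stimulus-response line (assumed form):
\[
  R \;=\; R_0 + m\,S,
\]
% where $S$ is the accommodative stimulus and $R$ the response (both in
% dioptres), and $m$ is the slope (typically $m < 1$, giving a "lag" at high
% demand). The "lead of accommodation" is the amount by which $R$ exceeds
% $S$ at low stimulus levels. Overestimating $m$ and underestimating the
% lead therefore tilts the measured curve relative to the true one.
```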
Abstract:
This thesis investigates various aspects of peripheral vision, which is known not to be as acute as vision at the point of fixation. Differences between foveal and peripheral vision are generally thought to be of a quantitative rather than a qualitative nature. However, the rate of decline in sensitivity between foveal and peripheral vision is known to be task dependent and the mechanisms underlying the differences are not yet well understood. Several experiments described here have employed a psychophysical technique referred to as 'spatial scaling'. Thresholds are determined at several eccentricities for ranges of stimuli which are magnified versions of one another. Using this methodology a parameter called the E2 value is determined, which defines the eccentricity at which stimulus size must double in order to maintain performance equivalent to that at the fovea. Experiments of this type have evaluated the eccentricity dependencies of detection tasks (kinetic and static presentation of a differential light stimulus), resolution tasks (bar orientation discrimination in the presence of flanking stimuli, word recognition and reading performance), and relative localisation tasks (curvature detection and discrimination). Most tasks could be made equal across the visual field by appropriate magnification. E2 values are found to vary widely dependent on the task, and possible reasons for such variations are discussed. The dependence of positional acuity thresholds on stimulus eccentricity, separation and spatial scale parameters is also examined. The relevance of each factor in producing 'Weber's law' for position can be determined from the results.
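The E2 parameter defined above corresponds to a simple linear scaling rule; writing it out makes the "doubling" property explicit (this is the standard formulation of spatial scaling, which the description matches).

```latex
% Standard spatial-scaling rule behind the E2 parameter:
\[
  S(E) \;=\; S_0\left(1 + \frac{E}{E_2}\right),
\]
% where $S_0$ is the threshold stimulus size at the fovea and $E$ the
% eccentricity. At $E = E_2$ the required size is $S(E_2) = 2S_0$: the
% stimulus must double to keep performance at its foveal level.
```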
Abstract:
This thesis first considers the calibration and signal processing requirements of a neuromagnetometer for the measurement of human visual function. Gradiometer calibration using straight wire grids is examined and optimal grid configurations determined, given realistic constructional tolerances. Simulations show that for a gradiometer balance of 1:10⁴ and a wire spacing error of 0.25 mm, the achievable calibration accuracy is 0.3% for gain, 0.3 mm for position and 0.6° for orientation. Practical results with a 19-channel second-order gradiometer based system exceed this performance. The real-time application of adaptive reference noise cancellation filtering to running-average evoked response data is examined. In the steady state, the filter can be assumed to be driven by a non-stationary step input arising at epoch boundaries. Based on empirical measures of this driving step, an optimal progression for the filter time constant is proposed which improves upon fixed-time-constant filter performance. The incorporation of the time-derivatives of the reference channels was found to improve the performance of the adaptive filtering algorithm by 15-20% for unaveraged data, falling to 5% with averaging. The thesis concludes with a neuromagnetic investigation of evoked cortical responses to chromatic and luminance grating stimuli. The global magnetic field power of evoked responses to the onset of sinusoidal gratings was shown to have distinct chromatic and luminance sensitive components. Analysis of the results, using a single equivalent current dipole model, shows that these components arise from activity within two distinct cortical locations. Co-registration of the resulting current source localisations with MRI shows a chromatically responsive area lying along the midline within the calcarine fissure, possibly extending onto the lingual and cuneal gyri. It is postulated that this area is the human homologue of the primate cortical area V4.
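A minimal sketch of adaptive reference noise cancellation of the general kind described, assuming a basic LMS update (the thesis's exact algorithm and time-constant progression are not given in the abstract), including the reported trick of augmenting the reference channels with their time-derivatives.

```python
# Sketch: LMS-style adaptive noise cancellation with optional derivative-
# augmented reference channels. Signals and parameters are invented.
import numpy as np

def lms_cancel(primary, refs, mu=0.01, add_derivatives=True):
    """Adaptively subtract reference-predicted noise from the primary channel."""
    if add_derivatives:
        refs = np.vstack([refs, np.gradient(refs, axis=1)])  # add d/dt channels
    w = np.zeros(refs.shape[0])
    cleaned = np.empty_like(primary)
    for t in range(len(primary)):
        x = refs[:, t]
        cleaned[t] = primary[t] - w @ x   # residual = signal estimate
        w += 2 * mu * cleaned[t] * x      # LMS weight update
    return cleaned

# Toy demo: a small "evoked response" buried in interference that also
# appears (with sensor noise) on a reference channel.
t = np.linspace(0, 1, 1000)
interference = np.sin(2 * np.pi * 50 * t)
signal = 0.2 * np.exp(-((t - 0.3) ** 2) / 1e-3)
primary = signal + interference
refs = np.atleast_2d(
    interference + 0.05 * np.random.default_rng(0).normal(size=t.size))
res = lms_cancel(primary, refs)
print(f"residual std {res.std():.3f} vs primary std {primary.std():.3f}")
```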
Abstract:
The observation that performance in many visual tasks can be made independent of eccentricity by increasing the size of peripheral stimuli according to the cortical magnification factor has dominated studies of peripheral vision for many years. However, it has become evident that the cortical magnification factor cannot be successfully applied to all tasks. To find out why, several tasks were studied using spatial scaling, a method which requires no pre-determined scaling factors (such as those predicted from cortical magnification) to magnify the stimulus at any eccentricity. Instead, thresholds are measured at the fovea and in the periphery using a series of stimuli, all of which are simply magnified versions of one another. Analysis of the data obtained in this way reveals the value of the parameter E2, the eccentricity at which foveal stimulus size must double in order to maintain performance equivalent to that at the fovea. The tasks investigated include hyperacuities (vernier acuity, bisection acuity, spatial interval discrimination, referenced displacement detection, and orientation discrimination), unreferenced instantaneous and gradual movement, flicker sensitivity, and face discrimination. In all cases tasks obeyed the principle of spatial scaling, since performance in the periphery could be equated to that at the fovea by appropriate magnification. However, E2 values found for different spatial tasks varied over a 200-fold range. In spatial tasks (e.g. bisection acuity and spatial interval discrimination) E2 values were low, reaching about 0.075 deg, whereas in movement tasks the values could be as high as 16 deg. Using a method of spatial scaling, it has been possible to equate foveal and peripheral performance in many diverse visual tasks. The rate at which peripheral stimulus size had to be increased as a function of eccentricity was dependent upon the stimulus conditions and the task itself. Possible reasons for these findings are discussed.
Abstract:
Faultline theory suggests that negative effects of team diversity are better understood by considering the influence of different dimensions of diversity in conjunction, rather than for each dimension separately. We develop and extend the social categorization analysis that lies at the heart of faultline theory to identify a factor that attenuates the negative influence of faultlines: the extent to which the team has shared objectives. The hypothesized moderating role of shared objectives received support in a study of faultlines formed by differences in gender, tenure, and functional background in 42 top management teams. The focus on top management teams has the additional benefit of providing the first test of the relationship between diversity faultlines and objective indicators of organizational performance. We discuss how these findings, and the innovative way in which we operationalized faultlines, extend faultline theory and research as well as offer guidelines to manage diversity faultlines.
Abstract:
Cardiovascular disease (CVD) continues to be one of the top causes of mortality in the world. The World Health Organization (WHO) reported that in 2004, CVD contributed to almost 30% of deaths out of an estimated worldwide death toll of 58 million [1]. Heart failure (HF) treatment varies from lifestyle adjustment to heart transplantation; its aims are to reduce HF symptoms, prolong patient survival and minimize risk [2]. One alternative available in the market for HF treatment is the Left Ventricular Assist Device (LVAD). The Chronic Intermittent Mechanical Support (CIMS) device is a novel LVAD for heart failure treatment using counterpulsation, similar to Intra-Aortic Balloon Pumps (IABP). However, the implantation site of the CIMS balloon is in the ascending aorta just distal to the aortic valve, in contrast with the IABP in the descending aorta. Counterpulsation, coupled with implantation close to the aortic valve, enables comparable flow augmentation with reduced balloon volume. Two prototypes of the CIMS balloon were constructed using rapid prototyping: the straight-body model is a cylindrical tube with a silicone membrane lining and zero expansive compliance, while the compliant-body model has a bulging structure that allows the membrane to expand under native systolic pressure, increasing the device's static compliance to 1.5 mL/mmHg. This study examined the effect of device compliance and vascular compliance on counterpulsating flow augmentation. Both prototypes were tested on a two-element Windkessel model human mock circulatory loop (MCL). The devices were placed just distal to the aortic valve and left coronary artery. The MCL mimicked HF with a cardiac output of 3 L/min, left ventricular pressure of 85/15 mmHg, aortic pressure of 70/50 mmHg and left coronary artery flow rate of 66 mL/min. The mean arterial pressure (MAP) was calculated to be 57 mmHg. Arterial compliance was set to 1.25 mL/mmHg and 2.5 mL/mmHg. Inflation of the balloon was triggered at the dicrotic notch, while deflation occurred at minimum aortic pressure prior to systole. Important haemodynamic parameters such as left ventricular pressure (LVP), aortic pressure (AoP), cardiac output (CO), left coronary artery flow rate (QcorMean), and dP (peak aortic diastolic augmentation pressure, AoPmax) were simultaneously recorded for both non-assisted and assisted modes. ANOVA was used to analyse the effect of both factors (balloon and arterial compliance) on flow augmentation. The results showed that for cardiac output and left coronary artery flow rate, there were significant differences between balloon and arterial compliance conditions at p < 0.001. Cardiac output recorded a maximum increase of 18% for the compliant body with stiff arterial compliance. Left coronary artery flow rate also recorded an increase of around 20% due to the compliant body and stiffer arterial compliance. Resistance to blood ejection recorded the highest difference for the combination of straight body and stiffer arterial compliance. From these results it is clear that both balloon and arterial compliance are statistically significant factors for flow augmentation in the peripheral artery and for reduction of resistance. Although the result for resistance reduction differed from that for flow augmentation, these results serve as an important consideration that will influence the future design of the CIMS balloon and its control strategy. References: 1. Mathers C, Boerma T, Fat DM. The Global Burden of Disease: 2004 Update. Geneva: World Health Organization; 2008. 2. Jessup M, Brozena S. Heart Failure. N Engl J Med 2003;348:2007-18.
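The quoted MAP of 57 mmHg is consistent with the standard weighted estimate from the stated aortic pressure of 70/50 mmHg (assuming, as seems likely, that this is the formula used):

```latex
% Standard MAP estimate from systolic/diastolic pressure:
\[
  \mathrm{MAP} \;\approx\; P_{\text{dia}}
  + \tfrac{1}{3}\,(P_{\text{sys}} - P_{\text{dia}})
  \;=\; 50 + \tfrac{1}{3}(70 - 50)
  \;\approx\; 56.7 \;\approx\; 57\ \text{mmHg}.
\]
```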
An investigation of production workers' performance variations and the potential impact of attitudes
Abstract:
In most manufacturing systems the contribution of human labour remains a vital element that affects overall performance and output. Workers’ individual performance is known to be a product of personal attitudes towards work. However, in current system design processes worker performance variability is assumed to be largely insignificant, and the potential impact of worker attitudes is ignored. This paper describes a field study that investigated the extent to which workers’ production task cycle times vary and the degree to which such variations are associated with attitude differences. Results show that worker performance varies significantly, much more than is assumed by contemporary manufacturing system designers, and that this variation appears to be due to production task characteristics. The findings of this research and their implications are discussed.
Abstract:
Considering the rapid growth of call centres (CCs) in India, their implications for businesses in the UK, and the scarcity of research on human resource management (HRM) related issues in Indian CCs, this research has two main aims: first, to highlight the nature of HRM systems relevant to Indian call centres; second, to understand the significance of internal marketing (IM) in influencing frontline employees’ job-related attitudes and performance. Since rewards are an important component of IM, the relationships between different types of rewards as part of an IM strategy, employee attitudes, and the performance of employees in Indian CCs will also be examined. Further, the research will investigate which type of commitment mediates the link between rewards and performance, and why. Data collection will proceed in two phases. The first phase will involve a series of in-depth interviews with both managers and employees, to understand the functioning of CCs and to develop suitable HRM systems for the Indian context. The second phase will involve data collection through questionnaires distributed to frontline employees and supervisors, to examine the relationships among IM, employee attitudes and performance. Such an investigation is expected to contribute to the development of better theory and practice.
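One standard way to test the proposed mediation (the proposal does not commit to a method, so this is only an illustrative choice) is the Sobel test on the product of the two path coefficients:

```latex
% Sobel test for mediation (illustrative; not specified by the proposal):
% a = effect of rewards on commitment; b = effect of commitment on
% performance, controlling for rewards; s_a, s_b = their standard errors.
\[
  z \;=\; \frac{a\,b}{\sqrt{\,b^{2}s_a^{2} + a^{2}s_b^{2}\,}},
\]
% with a significant $z$ indicating that commitment carries part of the
% rewards-performance link.
```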