954 results for Electromyography analysis techniques
Abstract:
Relational demographers and dissimilarity researchers contend that group members who are dissimilar (vs. similar) to their peers in terms of a given diversity attribute (e.g. demographics, attitudes, values or traits) feel less attached to their work group, experience less satisfying and more conflicted relationships with their colleagues, and consequently are less effective. However, qualitative reviews suggest that empirical findings tend to be weak and inconsistent (Chattopadhyay, Tluchowska and George, 2004; Riordan, 2000; Tsui and Gutek, 1999), and that it remains unclear when, how and to what extent such differences (i.e. relational diversity) affect group members' social integration (i.e. attachment to their work group, satisfaction and conflicted relationships with their peers) and effectiveness (Riordan, 2000). This absence of meta-analytically derived effect size estimates and the lack of an integrative theoretical framework leave practitioners with inconclusive advice regarding whether the effects elicited by relational diversity are practically relevant and, if so, how they should be managed. The current research develops an integrative theoretical framework, which it tests using meta-analysis techniques and by adding two further empirical studies to the literature. The first study reports a meta-analytic integration of the results of 129 tests of the relationship of relational diversity with social integration and individual effectiveness. Using meta-analytic and structural equation modelling techniques, it shows different effects of surface- and deep-level relational diversity on social integration. Specifically, low levels of interdependence accentuated the negative effects of surface-level relational diversity on social integration, while high levels of interdependence accentuated the negative effects of deep-level relational diversity on social integration. The second study builds on a social self-regulation framework (Abrams, 1994) and suggests that under high levels of interdependence relational diversity is not one thing but two: visibility and separation. Using ethnicity as a prominent example, it was proposed that separation has a negative effect on group members' effectiveness, leading to overall positive additive effects for those high in visibility and low in separation, but overall negative additive effects for those low in visibility and high in separation. These propositions were supported in a sample of 621 business students working in 135 ethnically diverse work groups in a business simulation course over a period of 24 weeks. The third study suggests visibility has a positive effect on group members' self-monitoring, while separation has a negative effect. The study proposed that high levels of visibility and low levels of separation lead to overall positive additive effects on self-monitoring, but overall negative additive effects for those low in visibility and high in separation. Results from four waves of data on 261 business students working in 69 ethnically diverse work groups in a business simulation course held over a period of 24 weeks support these propositions.
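For context, the basic building block of such a meta-analytic integration is inverse-variance pooling of study-level effect sizes. The sketch below is a generic fixed-effect pooling example with made-up correlations and sampling variances; it is not the meta-analytic structural equation models fitted in the study.

```python
def pool_effects(effects, variances):
    """Fixed-effect inverse-variance pooling: each study's effect is
    weighted by the reciprocal of its sampling variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical study-level correlations between relational diversity
# and social integration, with hypothetical sampling variances.
effects = [-0.12, -0.05, -0.20]
variances = [0.004, 0.010, 0.006]
print(pool_effects(effects, variances))  # pooled effect and its standard error
```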
Abstract:
Despite abundant literature on human behaviour in the face of danger, much remains to be discovered. Some descriptive models of behaviour in the face of danger are reviewed in order to identify areas where documentation is lacking. It is argued that little is known about the recognition and assessment of danger, and yet these are important aspects of cognitive processes. Speculative arguments about hazard assessment are reviewed and tested against the results of previous studies. Once hypotheses are formulated, the reasons for retaining the repertory grid as the main research instrument are outlined, and the choice of data analysis techniques is described. Whilst all samples used repertory grids, the rating scales differed between samples; therefore, an analysis is performed of the way in which rating scales were used in the various samples and of some reasons why the scales were used differently. Then, individual grids are examined and compared between respondents within each sample; consensus grids are also discussed. The major results from all samples are then contrasted and compared. It was hypothesized that hazard assessment would encompass three main dimensions, i.e. 'controllability', 'severity of consequences' and 'likelihood of occurrence', which would emerge in that order. The results suggest that these dimensions are but facets of two broader dimensions, labelled 'scope of human intervention' and 'dangerousness'. It seems that these two dimensions encompass a number of more specific dimensions, some of which can be further fragmented. Thus, hazard assessment appears to be a more complex process about which much remains to be discovered. Some of the ways in which further discovery might proceed are discussed.
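For context, repertory-grid data is typically a matrix of elements (here, hazards) rated on elicited constructs, and broad underlying dimensions such as those reported above are commonly extracted with principal component analysis. The sketch below is a generic illustration on a made-up grid, not the analysis performed in the thesis.

```python
import numpy as np

# Hypothetical repertory grid: rows are hazards, columns are elicited
# constructs rated on a 1-5 scale.
grid = np.array([
    [5, 4, 5, 1, 2],
    [1, 2, 1, 5, 4],
    [4, 5, 4, 2, 1],
    [2, 1, 2, 4, 5],
])

centred = grid - grid.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centred, full_matrices=False)

# Proportion of variance carried by each principal component; two
# dominant components would mirror the two broad dimensions ('scope of
# human intervention', 'dangerousness') reported above.
explained = singular_values**2 / (singular_values**2).sum()
print(explained.round(2))
```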
Abstract:
The application of high-power voltage-source converters (VSCs) to multiterminal dc networks is attracting research interest. The development of VSC-based dc networks is constrained by the lack of operational experience, the immaturity of appropriate protective devices, and the lack of appropriate fault analysis techniques. VSCs are vulnerable to dc-cable short-circuit and ground faults due to the high discharge current from the dc-link capacitance. However, faults occurring along the interconnecting dc cables are most likely to threaten system operation. In this paper, cable faults in VSC-based dc networks are analyzed in detail with the identification and definition of the most serious stages of the fault that need to be avoided. A fault location method is proposed because this is a prerequisite for an effective design of a fault protection scheme. It is demonstrated that it is relatively easy to evaluate the distance to a short-circuit fault using voltage reference comparison. For the more difficult challenge of locating ground faults, a method of estimating both the ground resistance and the distance to the fault is proposed by analyzing the initial stage of the fault transient. Analysis of the proposed method is provided and is based on simulation results, with a range of fault resistances, distances, and operational conditions considered.
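As a rough illustration of the physics behind dc-cable fault location: in the initial stage of a short-circuit fault, the dc-link capacitance discharges through the cable inductance, so the transient rings at a frequency set by the series-RLC loop, and the loop inductance grows with distance to the fault. The sketch below inverts that relationship under hypothetical cable parameters; it is a generic back-of-the-envelope estimator, not the voltage-reference-comparison or ground-fault method the paper proposes.

```python
import math

def fault_distance_km(f_osc_hz, c_dc_farads, l_per_km_henries):
    """Estimate the distance to a dc-cable short-circuit fault from the
    ringing frequency of the capacitor discharge transient, assuming a
    series-RLC loop: f = 1 / (2*pi*sqrt(L*C)) with L = l_per_km * d."""
    l_total_henries = 1.0 / ((2.0 * math.pi * f_osc_hz) ** 2 * c_dc_farads)
    return l_total_henries / l_per_km_henries

# Hypothetical values: 10 mF dc-link capacitance, 0.2 mH/km cable
# inductance, 180 Hz measured ringing frequency.
print(fault_distance_km(180.0, 10e-3, 0.2e-3), "km")
```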
Abstract:
This thesis addresses the question of how business schools established as public-private partnerships (PPPs) within a regional university in the English-speaking Caribbean survived for over twenty-one years and achieved legitimacy in their environment. The aim of the study was to examine how public and private sector actors contributed to the evolution of the PPPs. A social network perspective provided a broad relational focus from which to explore the phenomenon and engage disciplinary and middle-range theories to develop explanations. Legitimacy theory provided an appropriate performance dimension from which to assess PPP success. An embedded multiple-case research design, with three case sites analysed at three levels (the country and university environment, the PPP as a firm, and the subgroup level), constituted the methodological framing of the research process. The analysis techniques included four methods but relied primarily on discourse and social network analysis of interview data from 40 respondents across the three sites. A staged analysis of the evolution of the firm provided the ‘time and effects’ antecedents which formed the basis for sense-making to arrive at explanations of the public-private relationship-influenced change. A conceptual model guided the study, and explanations from the cross-case analysis were used to refine the process model and develop a dynamic framework and set of theoretical propositions that would underpin explanations of PPP success and legitimacy in matched contexts through analytical generalisation. The study found that PPP success was based on different models of collaboration and partner resource contribution that arose from a confluence of variables, including the development of shared purpose, private voluntary control in corporate governance mechanisms, and boundary-spanning leadership. The study contributes a contextual theory that explains how PPPs work and a research agenda of ‘corporate governance as inspiration’ from a sociological perspective of ‘liquid modernity’. Recommendations for policy and management practice were developed.
Abstract:
Golfers, coaches and researchers alike have keyed in on golf putting as an important aspect of overall golf performance. Of the three principal putting tasks (green reading, alignment and the putting action phase), the putting action phase has attracted the most attention from coaches, players and researchers. This phase includes the alignment of the club with the ball, the swing, and ball contact. A significant amount of research in this area has focused on measuring golfers' vision strategies with eye tracking equipment. Unfortunately, this research suffers from a number of shortcomings which limit its usefulness. The purpose of this thesis was to address some of these shortcomings. The primary objective was to re-evaluate golfers' putting vision strategies using binocular eye tracking equipment and to define a new, optimal putting vision strategy associated with both higher skill and success. To facilitate this research, bespoke computer software was developed and validated, and new gaze behaviour criteria were defined. Additionally, the effects of training (habitual) and competition conditions on the putting vision strategy were examined, as was the effect of ocular dominance. Finally, methods for improving golfers' binocular vision strategies are discussed, and a clinical plan for the optometric management of the golfer's vision is presented. The clinical management plan includes the correction of fundamental aspects of golfers' vision, including monocular refractive errors and binocular vision defects, as well as enhancement of their putting vision strategy, with the overall aim of improving performance on the golf course. This research was undertaken to gain a better understanding of the human visual system and how it relates to the sport performance of golfers specifically. Ultimately, the analysis techniques and methods developed are applicable to the assessment of visual performance in all sports.
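For context, gaze-behaviour criteria in eye-tracking research are usually built on top of fixation detection. The sketch below implements the standard dispersion-threshold (I-DT) algorithm on raw gaze samples; it is a generic illustration with placeholder thresholds, not the bespoke software developed and validated in the thesis.

```python
def detect_fixations(gaze, max_dispersion=1.0, min_duration=5):
    """Dispersion-threshold (I-DT) fixation detection.

    gaze: list of (x, y) samples, e.g. in degrees of visual angle;
    min_duration is the minimum fixation length in samples.
    Returns (start, end) index pairs, end exclusive.
    """
    fixations = []
    i = 0
    while i + min_duration <= len(gaze):
        j = i + min_duration
        xs = [p[0] for p in gaze[i:j]]
        ys = [p[1] for p in gaze[i:j]]
        dispersion = (max(xs) - min(xs)) + (max(ys) - min(ys))
        if dispersion <= max_dispersion:
            # Grow the window while dispersion stays under threshold.
            while j < len(gaze):
                xs.append(gaze[j][0])
                ys.append(gaze[j][1])
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                    break
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations
```

Fixation counts, durations and locations derived from output like this are the raw material for vision-strategy measures such as those the thesis defines.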
Abstract:
This paper develops a structured method, from the perspective of value, to organise and optimise the business processes of a product servitised supply chain (PSSC). The method integrates the e3value modelling tool with associated value measurement, evaluation and analysis techniques. It enables visualisation, modelling and optimisation of the business processes of a PSSC. At the same time, value co-creation and the potential contribution to an organisation's profitability can also be enhanced. The findings not only assist organisations attempting to adopt servitisation by helping them avert the servitisation paradox, but also help a servitised organisation to identify its key business processes and clarify their influence on supply chain operations.
Abstract:
Purpose: To compare graticule and image capture assessment of the lower tear film meniscus height (TMH). Methods: Lower tear film meniscus height measures were taken in the right eyes of 55 healthy subjects at two study visits separated by 6 months. Two images of the TMH were captured in each subject with a digital camera attached to a slit-lamp biomicroscope and stored in a computer for later analysis. Using the better of the two images, the TMH was quantified by manually drawing a line across the tear meniscus profile, following which the TMH was measured in pixels and converted into millimetres, where one pixel corresponded to 0.0018 mm. Additionally, graticule measures were carried out by direct observation using a calibrated graticule inserted into the same slit-lamp eyepiece. The graticule was calibrated so that actual readings, in 0.03 mm increments, could be made with a 40× ocular. Results: Smaller values of TMH were found in this study compared with previous studies. TMH, as measured with the image capture technique (0.13 ± 0.04 mm), was significantly greater (by approximately 0.01 ± 0.05 mm, p = 0.03) than that measured with the graticule technique (0.12 ± 0.05 mm). No bias was found across the range sampled. Repeatability of the TMH measurements taken at the two study visits showed that graticule measures were significantly different (0.02 ± 0.05 mm, p = 0.01) and highly correlated (r = 0.52, p < 0.0001), whereas image capture measures were similar (0.01 ± 0.03 mm, p = 0.16) and also highly correlated (r = 0.56, p < 0.0001). Conclusions: Although graticule and image analysis techniques showed similar mean values for TMH, the image capture technique was more repeatable than the graticule technique; this can be attributed to the higher measurement resolution of image capture (i.e. 0.0018 mm) compared with the graticule technique (i.e. 0.03 mm). © 2006 British Contact Lens Association.
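The pixel-to-millimetre conversion behind the image-capture readings, and the coarser quantisation of the graticule, can be worked through directly from the calibrations stated above; the pixel counts below are hypothetical.

```python
PIXEL_SIZE_MM = 0.0018    # image-capture calibration stated above
GRATICULE_STEP_MM = 0.03  # smallest graticule increment stated above

for pixels in (67, 72, 78):  # hypothetical tear meniscus heights in pixels
    image_mm = pixels * PIXEL_SIZE_MM
    # The graticule can only resolve to the nearest 0.03 mm step.
    graticule_mm = round(image_mm / GRATICULE_STEP_MM) * GRATICULE_STEP_MM
    print(f"{pixels} px -> image {image_mm:.4f} mm, graticule ~{graticule_mm:.2f} mm")
```

Three distinct image-capture values collapsing onto two graticule steps illustrates why the finer-resolution technique repeats more closely.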
Abstract:
Aim: To determine the theoretical and clinical minimum image pixel resolution and maximum compression appropriate for anterior eye image storage. Methods: Clinical images of the bulbar conjunctiva, palpebral conjunctiva, and corneal staining were taken at the maximum resolution of Nikon:CoolPix990 (2048 × 1360 pixels), DVC:1312C (1280 × 811), and JAI:CV-S3200 (767 × 569) single-chip cameras and the JVC:KYF58 (767 × 569) three-chip camera. The images were stored in TIFF format, and further copies were created with reduced resolution or compression. The images were then ranked for clarity on a 15-inch monitor (resolution 1280 × 1024) by 20 optometrists and analysed by objective image analysis grading. A theoretical calculation of the resolution necessary to detect the smallest objects of clinical interest was also conducted. Results: The theoretical calculation suggested that the minimum resolution should be ≥579 horizontal pixels at 25× magnification. Image quality was perceived subjectively as being reduced when the pixel resolution was lower than 767 × 569 (p < 0.005) or the image was compressed as a BMP or <50% quality JPEG (p < 0.005). Objective image analysis techniques were less susceptible to changes in image quality, particularly when using colour extraction techniques. Conclusion: It is appropriate to store anterior eye images at between 1280 × 811 and 767 × 569 pixel resolution and at up to 1:70 JPEG compression.
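One way a figure of this order can be derived: by the Nyquist criterion, at least two pixels must span the smallest object of clinical interest across the horizontal field of view. The field width and feature size below are assumptions chosen for illustration, not the parameters used in the paper.

```python
# Nyquist-style estimate of the minimum horizontal pixel count.
field_width_mm = 7.0          # assumed horizontal field of view at 25x
smallest_feature_mm = 0.024   # assumed smallest object of clinical interest

min_horizontal_pixels = 2 * field_width_mm / smallest_feature_mm
print(round(min_horizontal_pixels))  # ~583, the same order as the reported >=579
```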
Abstract:
Purpose ‐ This study provides empirical evidence for the contextuality of marketing performance assessment (MPA) systems. It aims to introduce a taxonomical classification of MPA profiles based on the relative emphasis placed on different dimensions of marketing performance in different companies and business contexts. Design/methodology/approach ‐ The data used in this study (n=1,157) were collected using a web-based questionnaire, targeted to top managers in Finnish companies. Two multivariate data analysis techniques were used to address the research questions. First, dimensions of marketing performance underlying the current MPA systems were identified through factor analysis. Second, a taxonomy of different profiles of marketing performance measurement was created by clustering respondents based on the relative emphasis placed on the dimensions and characterizing them vis-à-vis contextual factors. Findings ‐ The study identifies nine broad dimensions of marketing performance that underlie the MPA systems in use and five MPA profiles typical of companies of varying sizes in varying industries, market life cycle stages, and competitive positions associated with varying levels of market orientation and business performance. The findings support the previously conceptual notion of contextuality in MPA and provide empirical evidence for the factors that affect MPA systems in practice. Originality/value ‐ The paper presents the first field study of current MPA systems focusing on combinations of metrics in use. The findings of the study provide empirical support for the contextuality of MPA and form a classification of existing contextual systems suitable for benchmarking purposes. Limited evidence for performance differences between MPA profiles is also provided.
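To make the two multivariate steps concrete: factor analysis of the survey items yields factor scores, and clustering respondents on those scores yields the MPA profiles. The sketch below strings the two together on synthetic data with scikit-learn; the 9-dimension and 5-profile counts simply echo the abstract, and nothing else about the study's analysis is reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(1157, 30))  # synthetic stand-in for the survey items

# Step 1: extract dimensions of marketing performance.
scores = FactorAnalysis(n_components=9, random_state=0).fit_transform(X)

# Step 2: cluster respondents on the factor scores to form MPA profiles.
profiles = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)

print(np.bincount(profiles))  # respondents per profile
```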
Abstract:
In addition to being the chief cause of death in developed countries, systemic hypertension is also a leading cause of visual impairment. The eye is an end arteriolar system and is therefore susceptible to changes in blood pressure. It is also the only place where blood vessels can be clearly viewed by noninvasive techniques. This paper reviews current research into premalignant and malignant retinal signs of systemic hypertension. Previous methods of classifying retinal hypertensive signs are identified, along with more recent image analysis techniques. The need for observing the retinal vasculature as well as measuring blood pressure for monitoring systemic hypertensive patients is discussed in relation to current research. Copyright © 2002 by Current Science Inc.
Abstract:
Cardiovascular disease and stroke continue to be the chief causes of death in developed countries and one of the leading causes of visual impairment. The individual with systemic hypertension may remain asymptomatic for many years. Systemic mortality and morbidity are markedly higher for hypertensives than normotensives, but can be significantly reduced by early diagnosis and efficient management. However, the ability of optometrists to detect and appropriately refer systemic hypertensives remains generally poor. This review examines the disease, its effects, and its detection by observation of the retinal signs, particularly those considered to be pre-malignant. Previous methods of classifying retinal hypertensive signs are discussed along with more recent image analysis techniques. The role of the optometrist in detecting, monitoring and appropriately referring systemic hypertensives is discussed in relation to current research. © 2001 The College of Optometrists. Published by Elsevier Science Ltd. All rights reserved.
Abstract:
* This study was supported in part by the Natural Sciences and Engineering Research Council of Canada, and by the Gastrointestinal Motility Laboratory (University of Alberta Hospitals) in Edmonton, Alberta, Canada.
Abstract:
ACM Computing Classification System (1998): J.2.
Abstract:
2000 Mathematics Subject Classification: 62P10, 62H30
Abstract:
This thesis studies survival analysis techniques that deal with censoring in order to produce predictive tools for the risk of endovascular aortic aneurysm repair (EVAR) re-intervention. Censoring indicates that some patients do not continue follow-up, so their outcome class is unknown. Existing methods for dealing with censoring have drawbacks and cannot handle the high censoring of the two EVAR datasets collected. Therefore, this thesis presents a new solution to high censoring by modifying an approach that was previously incapable of differentiating between risk groups of aortic complications. Feature selection (FS) becomes complicated with censoring. Most survival FS methods depend on Cox's model; however, machine learning classifiers (MLCs) are preferred. Few methods have adopted MLCs to perform survival FS, but they cannot be used with high censoring. This thesis proposes two FS methods which use MLCs to evaluate features. Both FS methods use the new solution to deal with censoring, and combine factor analysis with a greedy stepwise FS search which allows eliminated features to re-enter the FS process. The first FS method searches for the best neural network configuration and subset of features. The second combines support vector machine, neural network, and k-nearest neighbor classifiers using simple and weighted majority voting to construct a multiple classifier system (MCS) that improves on the performance of the individual classifiers. It presents a new hybrid FS process that uses the MCS as a wrapper method and merges it with an iterated feature ranking filter method to further reduce the features. The proposed techniques outperformed FS methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator, in the log-rank test's p-values, sensitivity, and concordance. This demonstrates that the proposed techniques are more powerful in correctly predicting the risk of re-intervention and, consequently, enable doctors to set an appropriate future observation plan for each patient.
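For illustration, the voting-based multiple classifier system described above (support vector machine, neural network, and k-nearest neighbor classifiers combined by majority voting) can be sketched with off-the-shelf components. The snippet below uses scikit-learn on synthetic data; the hyperparameters and the commented-out weights are placeholders, not the configurations tuned in the thesis.

```python
# A minimal sketch of an SVM + neural-network + k-NN multiple classifier
# system with majority voting, on synthetic data (not the EVAR datasets).
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

mcs = VotingClassifier(
    estimators=[
        ("svm", SVC(random_state=0)),
        ("nn", MLPClassifier(max_iter=1000, random_state=0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="hard",        # simple majority voting
    # weights=[2, 1, 1],  # weighted majority voting variant
)

print(cross_val_score(mcs, X, y, cv=5).mean())  # accuracy of the ensemble
```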