791 results for Kerridge’s inaccuracy measure
Abstract:
Purpose - This paper provides a deeper examination of the fundamentals of commonly used techniques - such as coefficient alpha and factor analysis - in order to more strongly link the techniques used by marketing and social researchers to their underlying psychometric and statistical rationale. Design/methodology/approach - A wide-ranging review and synthesis of psychometric and other measurement literature, both within and outside the marketing field, is used to illuminate and reconsider a number of misconceptions which seem to have evolved in marketing research. Findings - The research finds that marketing scholars have generally concentrated on reporting what are essentially arbitrary figures, such as coefficient alpha, without fully understanding what these figures imply. It is argued that, if the link between theory and technique is not clearly understood, use of psychometric measure-development tools actually runs the risk of detracting from the validity of the measures rather than enhancing it. Research limitations/implications - The focus on one stage of a particular form of measure development could be seen as rather specialised. The paper also runs the risk of increasing the amount of dogma surrounding measurement, which runs contrary to its own spirit. Practical implications - This paper shows that researchers may need to spend more time interpreting measurement results. Rather than simply appealing to precedent, one needs to understand the link between measurement theory and actual technique. Originality/value - This paper presents psychometric measurement and item analysis theory in an easily understandable format, and offers an important set of conceptual tools for researchers in many fields. © Emerald Group Publishing Limited.
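Coefficient alpha, the figure criticised above as routinely reported but rarely understood, follows directly from item and total-score variances. A minimal sketch of the standard formula, using invented item data (nothing here is from the paper):

```python
# Cronbach's coefficient alpha: k/(k-1) * (1 - sum(item variances) / var(total)).
# Item scores below are invented for illustration only.

def cronbach_alpha(items):
    """items: one list of respondent scores per scale item (equal lengths)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Three 5-point items answered by five respondents
items = [
    [5, 4, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [5, 3, 3, 4, 4],
]
print(round(cronbach_alpha(items), 3))  # → 0.83
```

Seeing the formula makes the abstract's point concrete: alpha depends only on score variances, so a high value says nothing by itself about what the items actually measure.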
Abstract:
This thesis opens the discussion on corporate social responsibility (CSR) through a review of the literature on the conceptualisation, determinants, and rewards of organisational CSR engagement. The case is made for the need to draw attention to the micro-levels of CSR, and consequently to focus on employee social responsibility at multiple levels of analysis. To further research efforts in this area, the need for a behavioural measurement tool for employee social responsibility is acknowledged. Accordingly, the subsequent chapters outline the process of scale development and validation, resulting in a robust, reliable and valid employee social responsibility scale. This scale is then put to use in a field study, and the noteworthy roles of the antecedent and boundary conditions of transformational leadership, assigned CSR priority, and CSR climate are confirmed at the group and individual levels. The directionality of these relationships is subsequently explored in a time-lagged investigation set within a simulated business environment. The thesis collates and discusses the contributions of the findings from the research series, which highlight a consistent three-way interaction effect of transformational leadership, assigned CSR priority and CSR climate. Finally, various avenues for future research are outlined, given the infancy of the micro-level study of employee social responsibility.
Abstract:
In for-profit organizations, efficiency measurement with reference to the potential for profit augmentation is particularly important, as is its decomposition into technical and allocative components. Different profit efficiency approaches can be found in the literature to measure and decompose overall profit efficiency. In this paper, we highlight some problems within existing approaches and propose a new measure of profit efficiency based on a geometric mean of the input/output adjustments needed to maximize profits. Overall profit efficiency is calculated through this efficiency measure and is decomposed into its technical and allocative components. Technical efficiency is calculated based on a non-oriented geometric distance function (GDF) that is able to incorporate all the sources of inefficiency, while allocative efficiency is retrieved residually. We also define a measure of profitability efficiency which complements profit efficiency in that it makes it possible to retrieve the scale efficiency of a unit as a component of its profitability efficiency. In addition, the measure of profitability efficiency allows for a dual profitability interpretation of the GDF measure of technical efficiency. The concepts introduced in the paper are illustrated using a numerical example.
Abstract:
This paper describes the development and validation of a multidimensional measure of organizational climate, the Organizational Climate Measure (OCM), based upon Quinn and Rohrbaugh's Competing Values model. A sample of 6869 employees across 55 manufacturing organizations completed the questionnaire. The 17 scales contained within the measure had acceptable levels of reliability and were factorially distinct. Concurrent validity was measured by correlating employees' ratings with managers' and interviewers' descriptions of managerial practices and organizational characteristics. Predictive validity was established using measures of productivity and innovation. The OCM also discriminated effectively between organizations, demonstrating good discriminant validity. The measure offers researchers a relatively comprehensive and flexible approach to the assessment of organizational members' experience and promises applied and theoretical benefits. Copyright © 2005 John Wiley & Sons, Ltd.
Abstract:
Innovation has long been an area of interest to social scientists, and particularly to psychologists working in organisational settings. The team climate inventory (TCI) is a facet-specific measure of team climate for innovation that provides a picture of the level and quality of teamwork in a unit using a series of Likert scales. This paper describes its Italian validation in 585 working group members employed in health-related and other contexts. The data were evaluated by means of factorial analysis (including an analysis of the internal consistency of the scales) and Pearson’s product moment correlations. The results show the internal consistency of the scales and the satisfactory factorial structure of the inventory, despite some variations in the factorial structure mainly due to cultural differences and the specific nature of Italian organisational systems.
Abstract:
Computer-simulated trajectories of bulk water molecules form complex spatiotemporal structures at the picosecond time scale. This intrinsic complexity, which underlies the formation of molecular structures at longer time scales, has been quantified using a measure of statistical complexity. The method estimates the information contained in the molecular trajectory by detecting and quantifying temporal patterns present in the simulated data (velocity time series). Two types of temporal patterns are found. The first, defined by the short-time correlations corresponding to the velocity autocorrelation decay times (≈0.1 ps), remains asymptotically stable for time intervals longer than several tens of nanoseconds. The second is caused by previously unknown longer-time correlations (found at time scales longer than nanoseconds) leading to a value of statistical complexity that slowly increases with time. A direct measure, based on the notion of statistical complexity, is introduced that describes how the trajectory explores the phase space and is independent of the particular molecular signal used as the observed time series. © 2008 The American Physical Society.
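The abstract does not spell out how temporal patterns are detected and quantified. Purely as a generic illustration (not the paper's statistical-complexity estimator), one crude proxy for temporal-pattern content in a symbolised signal is the number of phrases in an LZ78 parse; the binary strings below stand in for an assumed binarised velocity series:

```python
import random

def lz78_phrase_count(symbols):
    """Number of phrases in an LZ78 parse of a symbol string: a crude,
    generic proxy for temporal-pattern content (fewer phrases = more
    internal structure). NOT the statistical-complexity measure above."""
    seen = set()
    phrase = ""
    for ch in symbols:
        phrase += ch
        if phrase not in seen:   # new phrase: record it and restart
            seen.add(phrase)
            phrase = ""
    return len(seen) + (1 if phrase else 0)

# Stand-ins for a binarised velocity series: strict periodicity vs. white noise
random.seed(1)
periodic = "01" * 500
noise = "".join(random.choice("01") for _ in range(1000))
print(lz78_phrase_count(periodic), lz78_phrase_count(noise))
```

A strictly periodic string parses into far fewer phrases than an equally long random one, which is the qualitative behaviour any pattern-based complexity measure must exhibit.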
Abstract:
PURPOSE: To investigate the MacDQoL test-retest reliability and sensitivity to change in vision over a period of one year in a sample of patients with age-related macular degeneration (AMD). DESIGN: A prospective, observational study. METHOD: Patients with AMD from an ophthalmologist's list (n = 135) completed the MacDQoL questionnaire by telephone interview and underwent a vision assessment on two occasions, one year apart. RESULTS: Among participants whose vision was stable over one year (n = 87), MacDQoL scores at baseline and follow-up were highly correlated (r = 0.95; P < .0001). Twelve of the 22 scale items had intraclass correlations of >.80; only two were correlated <.70. There was no difference between baseline and follow-up scores (P = .85), indicating excellent test-retest reliability. Poorer quality of life (QoL) at follow-up, measured by the MacDQoL present QoL overview item, was associated with deterioration in both better-eye and binocular distance visual acuity (VA) (r = 0.29, P = .001 and r = 0.21, P = .016, respectively; n = 135). There was a positive correlation between deterioration in the MacDQoL average weighted impact score and deterioration in both binocular near VA and reading speed (r = 0.20, P = .019 and r = 0.18, P = .041, respectively; n = 135). CONCLUSION: The MacDQoL has excellent test-retest reliability. Its sensitivity to change in vision status was demonstrated in correlational analyses. The measure indicates that the negative impact of AMD on QoL increases with increasing severity of visual impairment.
Abstract:
Data Envelopment Analysis (DEA) is a nonparametric method for measuring the efficiency of a set of decision making units such as firms or public sector agencies, first introduced into the operational research and management science literature by Charnes, Cooper, and Rhodes (CCR) [Charnes, A., Cooper, W.W., Rhodes, E., 1978. Measuring the efficiency of decision making units. European Journal of Operational Research 2, 429–444]. The original DEA models were applicable only to technologies characterized by positive inputs/outputs. In the subsequent literature there have been various approaches to enable DEA to deal with negative data. In this paper, we propose a semi-oriented radial measure, which permits the presence of variables which can take both negative and positive values. The model is applied to data on a notional effluent processing system to compare the results with those yielded by two alternative methods for dealing with negative data in DEA: the modified slacks-based model suggested by Sharp et al. [Sharp, J.A., Liu, W.B., Meng, W., 2006. A modified slacks-based measure model for data envelopment analysis with ‘natural’ negative outputs and inputs. Journal of the Operational Research Society 57 (11), 1–6] and the range directional model developed by Portela et al. [Portela, M.C.A.S., Thanassoulis, E., Simpson, G., 2004. A directional distance approach to deal with negative data in DEA: An application to bank branches. Journal of the Operational Research Society 55 (10), 1111–1121]. A further example explores the advantages of using the new model.
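For readers unfamiliar with the baseline CCR model that these negative-data variants extend: the input-oriented envelopment problem is a small linear programme, sketched here with scipy on made-up positive data (this is the classical model only, not the semi-oriented radial measure proposed in the paper):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency score of unit j0 (positive data only).
    X: (units, inputs), Y: (units, outputs). Illustrative sketch."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Inputs:  sum_j lambda_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.hstack([-X[j0].reshape(-1, 1), X.T])
    # Outputs: -sum_j lambda_j * y_rj <= -y_r,j0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([np.zeros(m), -Y[j0]]),
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Two units, one input, one output: unit 1 uses twice the input for the same output
X = np.array([[2.0], [4.0]])
Y = np.array([[2.0], [2.0]])
print(ccr_efficiency(X, Y, 0))  # theta = 1.0 (efficient)
print(ccr_efficiency(X, Y, 1))  # theta = 0.5 (could halve its input)
```

The score of 0.5 for the second unit means its inputs could be scaled down by half while its outputs remain attainable within the envelopment of observed units; the negative-data methods compared in the paper exist precisely because this radial contraction breaks down when inputs or outputs can be negative.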
Abstract:
Objectives: To develop an objective measure to enable hospital Trusts to compare their use of antibiotics. Design: Self-completion postal questionnaire with telephone follow-up. Sample: Four hospital Trusts in the English Midlands. Results: The survey showed that it was possible to collect data on the number of Defined Daily Doses (DDDs) of quinolone antibiotic dispensed per Finished Consultant Episode (FCE) in each Trust. In the four Trusts studied, the mean DDD/FCE was 0.197 (range 0.117 to 0.258). This indicates that, based on a typical course length of 5 days, 3.9% of patient episodes resulted in the prescription of a quinolone antibiotic. Antibiotic prescribing control measures in each Trust were found to be comparable. Conclusion: The measure will enable Trusts to compare their usage of quinolone antibiotics objectively and to use this information to carry out clinical audit should differences be recorded. This is likely to be applicable to other groups of antibiotics.
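The 3.9% figure follows from simple arithmetic: DDDs dispensed per episode, divided by a typical course length (assuming one DDD per treatment day), gives the fraction of episodes in which a quinolone was prescribed. A quick check of the numbers quoted above:

```python
# Reproduce the arithmetic reported above: mean DDDs dispensed per finished
# consultant episode (FCE), divided by a typical 5-day course (one DDD/day).
mean_ddd_per_fce = 0.197
course_length_days = 5

fraction_of_episodes = mean_ddd_per_fce / course_length_days
print(f"{fraction_of_episodes:.1%}")  # → 3.9%
```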
Abstract:
There has been much recent research into extracting useful diagnostic features from the electrocardiogram, with numerous studies claiming impressive results. However, the robustness and consistency of the methods employed in these studies is rarely, if ever, mentioned. Hence, we propose two new methods: a biologically motivated time series derived from consecutive P-wave durations, and a mathematically motivated regularity measure. We investigate the robustness of these two methods when compared with current corresponding methods. We find that the new time series performs admirably as a complement to the current method, and the new regularity measure consistently outperforms the current measure in numerous tests on real and synthetic data.
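The abstract does not define the proposed regularity measure. As a sketch of the general idea only, sample entropy is a standard regularity statistic for physiological time series (lower values mean more regular); the signals below are illustrative, not ECG data:

```python
import math
import random

def sample_entropy(series, m=2, r_frac=0.2):
    """Simplified sample entropy (lower = more regular). A generic textbook
    statistic, NOT the regularity measure proposed in the paper."""
    n = len(series)
    mean = sum(series) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in series) / n)
    r = r_frac * sd  # tolerance: fraction of the signal's standard deviation

    def matches(length):
        # Count template pairs whose pointwise (Chebyshev) distance is <= r
        t = [series[i:i + length] for i in range(n - length + 1)]
        return sum(
            1
            for i in range(len(t))
            for j in range(i + 1, len(t))
            if max(abs(a - b) for a, b in zip(t[i], t[j])) <= r
        )

    b = matches(m)      # matches at template length m
    a = matches(m + 1)  # matches that persist at length m + 1
    return -math.log(a / b) if a and b else float("inf")

random.seed(0)
periodic = [math.sin(0.5 * i) for i in range(200)]  # smooth, regular signal
noise = [random.random() for _ in range(200)]       # white noise
print(sample_entropy(periodic) < sample_entropy(noise))  # regular signal scores lower
```

Any regularity measure of this family should rank a smooth periodic signal as more regular than noise, which is the kind of consistency check on synthetic data that the abstract describes.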
Abstract:
Personal selling and sales management play a critical role in the short- and long-term success of the firm, and have thus received substantial academic interest since the 1970s. Sales research has examined the role of the sales manager in some depth, defining a number of key technical and interpersonal roles which sales managers have in influencing sales force effectiveness. However, one aspect of sales management which appears to remain unexplored is that of their resolution of salesperson-related problems. This study represents the first attempt to address this gap by reporting on the conceptual and empirical development of an instrument designed to measure sales managers' problem resolution styles. A comprehensive literature review and qualitative research study identified three key constructs relating to sales managers' problem resolution styles. The three constructs identified were termed: sales manager willingness to respond, sales manager caring, and sales manager aggressiveness. Building on this, existing literature was used to develop a conceptual model of salesperson-specific consequences of the three problem resolution style constructs. The quantitative phase of the study consisted of a mail survey of UK salespeople, achieving a total sample of 140 fully usable responses. Rigorous statistical assessment of the sales manager problem resolution style measures was undertaken, and construct validity examined. Following this, the conceptual model was tested using latent variable path analysis. The results for the model were encouraging overall, and also with regard to the individual hypotheses. Sales manager problem resolution styles were found individually to have significant impacts on the salesperson-specific variables of role ambiguity, emotional exhaustion, job satisfaction, organisational commitment and organisational citizenship behaviours.
The findings, theoretical and managerial implications, limitations and directions for future research are discussed.
Abstract:
This paper develops a theory of tourist satisfaction which is tested using a consumerist gap scale derived from the Ragheb and Beard Leisure Motivation Scale. The sample consists of 1127 holidaymakers from the East Midlands, UK. The results confirm the four dimensions of the original scale, and are used to develop clusters of holidaymakers. These clusters are found to be determinants of attitudes towards holiday destination attributes, and are independent of socio-demographic variables. Other determinants of holidaymaker satisfaction are also examined. Among the conclusions drawn are the continuing importance of life-cycle stages and of previous holidaymaker satisfaction. Little evidence is found for the travel career hypothesis developed by Professor Philip Pearce.